A purely data-driven framework for prediction, optimization, and control of networked processes: application to networked SIS epidemic model

08/01/2021
by   Ali Tavasoli, et al.

Networks are hallmarks of many complex phenomena where interweaving interactions between different agents transform simple local rule-sets into nonlinear emergent behaviors. While some recent studies unveil associations between the network structure and the underlying dynamical process, identifying stochastic nonlinear dynamical processes continues to be an outstanding problem. Here we develop a simple data-driven framework based on operator-theoretic techniques to identify and control stochastic nonlinear dynamics taking place over large-scale networks. The proposed approach requires no prior knowledge of the network structure and identifies the underlying dynamics solely from a collection of two-step snapshots of the states. This data-driven system identification is achieved by using the Koopman operator to find a low-dimensional representation of the dynamical patterns that evolve linearly. Further, we use the global linear Koopman model to solve critical control problems by applying it to model predictive control (MPC)–typically a challenging proposition when applied to large networks. We show that our proposed approach tackles this by converting the original nonlinear programming into a more tractable optimization problem that is convex and has far fewer variables.


1 Introduction

Identifying dynamical systems from observations is central to many scientific disciplines, including the physical, biological, computer, and social sciences as well as economics, and it opens doors for engineering and interventions (Wang et al., 2016). Although inferring the network structure may be impossible in practice, as different networks may display similar dynamical behavior, proper identification of the dynamics remains feasible (Prasse and Mieghem, 2020). Besides enabling accurate prediction of the network process, the identified model must be well suited for practically implementable strategies to control the underlying dynamics, which are both stochastic and nonlinear.

Deducing laws that govern the relationship between a system's structure and functions is a formidable challenge due to difficulties associated with nonlinearities, stochasticity, high dimensionality, and inherent correlations between the network topology and the underlying dynamical process. As such, approaches such as mean-field approximation (Van Mieghem et al., 2009; Sahneh et al., 2013) have been proposed in recent years to model several dynamical processes over networks, and they offer good estimates only under limited circumstances. On the other hand, rapidly developing information technologies leave us with a wealth of data over larger and more diverse networks, further spurring data-driven approaches that construct models without requiring explicit prior knowledge when studying different dynamical processes over networks.

Our main goal is to establish a data-driven framework for the identification and control of network processes. Overall, a systematic and accurate method for identifying, estimating, and controlling spatiotemporal dynamical features of network processes from data is still an open and challenging issue. To fill this gap, we leverage modern machine learning techniques to model network dynamics effectively in an operator-theoretic setting. In this way, we obtain a purely data-driven approach that assumes no knowledge or identification of network parameters and structure. Furthermore, eigenfunctions of the Koopman operator summarize the network processes on low-order manifolds that evolve linearly. We further establish a tractable framework based on the obtained representation to resolve challenging optimization and control tasks over networks. Finally, we demonstrate the use of the Koopman operator for system identification and control by applying it to a common model of disease spread on networks, the susceptible-infected-susceptible (SIS) model (Van Mieghem et al., 2009). We show that approximating the complex nonlinear dynamics of disease spreading using the Koopman operator allows one to develop optimal control regimes that can quickly mitigate the outbreak, a significant improvement over a uniform intervention strategy. Additionally, we show that effective control over infection rates can be accomplished using an appropriately chosen lower-dimensional representation of the high-dimensional Koopman operator.

1.1 Operator-theoretic approach for the analysis of high-dimensional interconnected systems

Recent advances in data analysis have shown that many complex systems possess dominant low-dimensional invariant subspaces that are hidden in the high-dimensional ambient space, an underlying structure that enables compact representations for modeling and control (Brunton and Kutz, 2019). To infer these compact representations, operator-theoretic frameworks have been used to address nonlinear relations between subspaces and provide a principled linear embedding for dynamical systems. In particular, the Koopman operator (Mezić, 2005) is an infinite-dimensional linear operator that describes the time evolution of measurement functions (observables) of the system state, and its spectral decomposition completely characterizes the behavior of the nonlinear system (Brunton et al., 2021). One feature underlying the success of the Koopman operator in the study of nonlinear dynamical systems is that its finite-dimensional representations are connected with finding effective coordinate transformations in which the nonlinear dynamics appear linear. Stated another way, the Koopman operator seeks invariant sets of nonlinear observables that evolve linearly. This is different from conventional approaches that commonly rely on linearization and are only locally valid. As such, the Koopman operator can be viewed as an extension of the Hartman–Grobman theorem, which is valid only within a vicinity of hyperbolic stationary points, to the whole basin of attraction. This offers the prospect of prediction, estimation, and control of nonlinear systems by standard methods developed for linear systems.
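To make the idea of a linear embedding concrete, consider a classical two-dimensional example frequently used in the Koopman literature (it is not taken from this paper): a system with a single quadratic nonlinearity becomes exactly linear after augmenting the state with one nonlinear observable,

dx_1/dt = μ x_1,    dx_2/dt = λ (x_2 − x_1^2).

With the lifted coordinates (y_1, y_2, y_3) = (x_1, x_2, x_1^2), the dynamics read

dy_1/dt = μ y_1,    dy_2/dt = λ y_2 − λ y_3,    dy_3/dt = 2μ y_3,

which is a linear system on the three-dimensional Koopman-invariant subspace spanned by {x_1, x_2, x_1^2}.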

The Koopman operator sketches a rich global picture of the nonlinear system by characterizing several of its underlying features (Mauroy et al., 2020). For example, Koopman eigenfunctions at eigenvalue 1 determine invariant sets, and the level sets of eigenfunctions associated with eigenvalues on the unit circle form invariant partitions of the dynamics (Mezić, 2005). In fact, such eigenfunctions are connected with ergodic and harmonic quotients that reveal coherent structures in dynamics (Budišić and Mezić, 2012). Level sets of Koopman eigenfunctions also characterize the sets of points (known as isochrons and isostables) that partition the basin of attraction of limit cycles and fixed points, and reduce such dynamics to action–angle coordinates (Mauroy and Mezić, 2012; Mauroy et al., 2013). Mauroy and Mezić (2016) established a relationship between the existence of specific eigenfunctions of the Koopman operator and the global stability property of fixed points and limit cycles. Hence the Koopman operator offers a framework better suited for control by circumventing complexities due to nonlinearity and transforming the nonlinear dynamics into globally linear representations (Proctor et al., 2018; Brunton et al., 2021); e.g., Brunton et al. (2017) decomposed chaotic systems into intermittently forced linear systems.

In recent years, three main approaches for numerical computation of the Koopman operator have been developed: Laplace analysis, finite section methods, and Krylov subspace methods (Mezić, 2020). In particular, finite section methods construct an approximate operator acting on a finite-dimensional function subspace. The best known such method is dynamic mode decomposition (DMD) (Schmid, 2010), which features state observables. DMD works based on proper orthogonal decomposition (POD) of high-dimensional linear measurements to extract dynamical patterns that evolve on low-dimensional manifolds (Schmid, 2010; Tu et al., 2014). Therefore, DMD provides a model in terms of the reduced sets of modes and their progression in time. Although DMD has evolved in recent years into a popular approach to extract linear models from linear measurements (Kutz et al., 2016a), it inherits the limitations of singular value decomposition and lacks sufficient delicacy to dissect rich nonlinear phenomena and the associated transient dynamics (Kutz et al., 2016b, Chapter 1).

More recently, the extended DMD (EDMD) (Williams et al., 2015) was developed to account for the limitations of DMD; it employs nonlinear observables to recognize a finite-dimensional invariant subspace of the Koopman operator and converges to the Galerkin approximation. Other variants of DMD have been developed to represent different dynamical systems or to handle numerical challenges, to mention a few: kernel DMD (Williams et al., 2015), Hankel DMD (Arbabi and Mezić, 2017), HAVOK DMD (Brunton et al., 2017), tensor-based DMD (Fujii and Kawahara, 2019), and recent works that leverage dictionary learning (Li et al., 2017) and deep learning architectures (Lusch et al., 2018; Otto and Rowley, 2019; Mardt et al., 2020; Pan and Duraisamy, 2020).

The simplified representation of complex nonlinear dynamics using the Koopman operator provides exciting opportunities to tackle the challenges in controlling nonlinear systems (Brunton et al., 2016). Korda and Mezić (2020) put forward a convex optimization framework for optimal construction of Koopman eigenfunctions for prediction and control. Several extensions have been developed for actuated and controlled systems in Williams et al. (2016); Kaiser et al. (2017a); Proctor et al. (2016, 2018). These approaches have recently been applied to a wealth of real-world problems such as fluid dynamics (Arbabi and Mezić, 2017; Rowley and Dawson, 2017a), power grids (Korda et al., 2018), molecular dynamics (Wehmeyer and Noé, 2018), time series classification (Surana, 2018), robotic systems (Abraham and Murphey, 2019; Bruder et al., 2020), energy consumption in buildings (Boskic et al., 2020; Hiramatsu et al., 2020), traffic (Avila and Mezić, 2020), spacecraft (Chen and Shan, 2020), and hydraulic fracturing operations (Narasingam and Kwon, 2020).

Given that the Koopman operator's lower-order representation of a complex nonlinear system is linear, it is very appealing from the perspective of developing control schemes (Mauroy et al., 2020). Methods for the optimization and on-line control of linear systems are well developed, and the potential to apply these methods productively to the control of complex nonlinear systems is a marked advantage of the Koopman operator approach. We want to stress that the Koopman operator captures the dynamics in the whole basin of attraction, and thus it can be a more accurate replacement for locally linearized models in these approaches. This is achieved by proper nonlinear measurements in the space of intrinsic coordinates that yield complete information about the dynamics. Consequently, the resulting linear predictor is immediately amenable to the range of mature control design techniques, such as optimal control (Brunton et al., 2016; Kaiser et al., 2017b; Das et al., 2018) or switching control (Sootla et al., 2018). In particular, since the Koopman predictor works well for short prediction horizons, it is promising for model predictive control (MPC), which needs prediction over only a few steps (Korda and Mezić, 2018). Furthermore, the Koopman operator yields a linear predictor that translates the original nonlinear MPC into a convex optimization problem (Korda and Mezić, 2018) that is more appealing for numerical treatment. The Koopman operator has also proven successful for resolving control challenges of partial differential equations (PDEs) by mapping the original nonlinear infinite-dimensional control problem into a low-dimensional linear one (Peitz and Klus, 2019). Further advantages of this approach in resolving MPC problems can be found in optimizing power grids (Korda et al., 2018), active learning of dynamics for robotic systems control (Abraham and Murphey, 2019), and spacecraft attitude stabilization (Chen and Shan, 2020).

1.2 Spreading Processes on Networks

Control of dynamical processes over networks has recently been examined for mitigation of networked spreading processes. These spreading processes are often used to model the spread of disease through networks that represent person-to-person contact or interaction, which suggests that effective methods capable of controlling spreading processes have significant public health implications. Preciado and Jadbabaie (2009) analyze network spectral properties and consider removing nodes and removing links as control inputs to tame an initial viral infection. In this regard, Van Mieghem et al. (2011) study the two problems of optimal node removal and optimal link removal, show them to be NP-complete and NP-hard, respectively, and propose greedy strategies based on spectral measures.

Worst-case analysis shows that completely removing nodes or links is ineffective–not to mention that node/link removal in real-world networks is often impractical, illegal, or both–and later works considered controlling disease spreading processes by allocating preventive resources and promoting corrective policies. These resources and policies do not alter the structure of the network itself, but rather modulate the susceptibility to infection of individual nodes and/or the probability that infection will spread along specific links.

Preciado et al. (2014) consider both rate-constrained and budget-constrained allocation simultaneously in the framework of geometric programming. Nowzari et al. (2017); Watkins et al. (2018); Shakeri et al. (2015) investigate other variants and provide general solutions.

The two main strategies currently used for controlling disease spread on networks are, first, to allocate resources over the network components (individuals or their ties) so as to find the minimum budget required to eradicate the disease at a desired rate and, second, to mitigate the spread at the fastest possible decay rate given a fixed budget. The resulting optimization problems have discrete variables, and relaxing them by letting spreading rates take values in a feasible continuous interval aids in their numerical solution.

One major limitation of the previously described network-based allocation approaches is that they are off-line and therefore operate without feedback from the current state of the network. This means that these allocation strategies are incapable of adapting to changing demands, leading at best to a non-optimal resource allocation and at worst to a failure to control the disease process due to changing network conditions. Optimal control strategies are employed to solve this issue by allowing the control allocation to vary over time (Khanafer and Başar, 2014; Eshghi et al., 2016); this approach is used in Kandhway and Kuri (2016); He and Van Mieghem (2019) for virus spreading problems and in Dashtbali et al. (2020) for investigating social distancing in response to an epidemic using a differential-game approach.

Watkins et al. (2020) use MPC for optimal containment of epidemics over networks. In particular, during the recent outbreak of COVID-19, the significance of identifying and intercepting virus spread over networks has become even more evident. Carli et al. (2020) study mitigating the outbreak using a multi-region scenario, with the underlying network representing inter-region mobility, and propose a model-based MPC where the parameters of the model are fitted to the data collected from the network of Italian regions.

Despite the vast literature, finding practical approaches for controlling epidemics over complex networks remains an outstanding problem: real-world assumptions and the corresponding uncertainties and unknowns pose challenges for model-based approaches. We summarize the main shortcomings here. First, the existing control methods are specialized for deterministic models that are often approximations of the original stochastic models. Despite established connections between the two, these connections are only relevant for simplified cases, and the connections between the control solutions of the two models are unclear. Second, the current approaches admit centralized solutions whose computational burden makes them intractable for large networks. Third, conventional methods assume no uncertainties and require complete knowledge of everything, including natural recovery rates, infection rates, state information, and network topologies. Avoiding the above simplifying assumptions while having tools that can handle network and parameter variations is necessary for practical approaches.

1.3 Contribution statement

This work attempts to address the shortcomings of the available approaches discussed above, where we consider identifying and controlling epidemics solely based on spatio-temporal observations. Our approach is intended to tackle the identification and control of stochastic processes over complex networks through several features. First, our method is purely data-driven, makes no assumptions about network parameters, structure, or the underlying dynamical process, and is based on the original stochastic process that produced the data at the outset. Second, to reach an effective data-driven method that is tractable for optimization and control over large networks, we use recent advances in machine learning and operator theory to identify a Koopman representation that is interpretable, low-dimensional, and linear. In this way, the underlying high-dimensional dynamics is represented by extracting the most effective modes that evolve linearly under the networked process. Third, we revisit the important MPC problem over complex networks and show that our proposed approach maps the original high-dimensional nonlinear optimization problem into a low-dimensional convex one that is well suited for existing numerical approaches and enables real-time implementation.

We organize the paper as follows. Section 2 presents the data-driven Koopman identification of stochastic processes over networks. Section 3 explains the nonlinear MPC problem over networks and its transformation into a Koopman MPC. In Section 4, we apply our approach to a Markov process representing the SIS epidemic model over a network. Section 5 is devoted to concluding remarks and discussion.

2 Koopman operator

2.1 Problem statement

We consider a controlled discrete-time Markov process taking place over a network as

(1)   x_{k+1} = F(x_k, u_k, w_k)

where x_k is the system state vector in the state space X ⊆ R^n, u_k ∈ U ⊆ R^m is the control input, and w_k ∈ Ω is an element in the probability space associated with the stochastic dynamics and its probability measure. The aim is to obtain a nonlinear embedding mapping (transformation) ψ: X → R^N, with N > n, from the original state space to a subset of R^N that enables us to construct a linear predictor for the expected value of the form

(2)   z_{k+1} = A z_k + B u_k,    ŷ_k = C z_k

where A ∈ R^{N×N}, B ∈ R^{N×m}, C ∈ R^{n×N}, and the output ŷ_k is used for prediction and control of the expected value of the state vector E[x_k] given the initial condition z_0 = ψ(x_0). Therefore, we seek a linear system that faithfully represents the original nonlinear dynamical system. The key is finding a proper transformation ψ that maps the original state x to the lifted state z = ψ(x), with (typically) N > n, that evolves linearly–though the number of dimensions is still of concern for practical implementation, we will postpone the discussion on reducing the dimensionality to Section 2.4. The predictor model (2) is amenable to linear control design approaches in the lifted space. Moreover, the input u_k remains unlifted in (2), allowing direct use of linear constraints on the input and/or states in the lifted space. As long as the predictions of (2) are accurate for short time horizons, the model is suitable for linear control methodologies (such as linear MPC).

Next we demonstrate that such a transformation can be established in the framework of the Koopman operator by using data in the triplet form (x_j, y_j, u_j), where y_j is the observed successor of x_j under input u_j (see Figure 3 for a sketch of the main ideas in the paper).

Figure 3: Main idea. (a) Koopman identification. We only collect stochastic (binary) data in the form of snapshot triples (x_j, y_j, u_j), with y_j the successor state of x_j under input u_j. Although we assume no prior knowledge of the system parameters, the underlying dynamics, or the network geometry, one can still incorporate available information to enrich the proposed analysis. The Koopman operator is an infinite-dimensional operator that specifies how functions of the state evolve; it is projected onto an (invariant) subspace of observables, where the lifting set maps the original system into a linear one in a higher dimension. For effective low-order modeling, a Koopman mode decomposition represents the dynamics in terms of the most effective Koopman modes. (b) The Koopman operator translates the original nonlinear MPC into a convex problem that is amenable to numerical solution. We further use the reduced-order Koopman model to decrease the size of the optimization.

2.2 Koopman operator for controlled stochastic processes

The Koopman operator embeds the nonlinear dynamics into an appropriate Hilbert space where the dynamics is linear and one can construct the predictor (2). Namely, the Koopman operator is an infinite-dimensional linear operator that takes a scalar observable function ψ belonging to a function space F and gives the evolution of its expected value along the dynamics in the state space. The function space F is invariant under the action of the Koopman operator. Additionally, to fully describe the underlying dynamical system, F must contain the components of the state vector x.

Following Proctor et al. (2018), who generalize the Koopman operator for systems with input, we consider the Koopman operator for the stochastic process (1) as

(3)   (K_u ψ)(x) = E[ ψ(F(x, u, w)) ]

with ψ ∈ F and the expectation taken over the process noise w. Including actuation in (3) renders the Koopman family non-autonomous. In the equivalent autonomous formulation, Korda and Mezić (2018) extend the system state to include all control sequences and apply the shift operator to advance the observation of the input. The Koopman operator's spectral properties are directly connected with several geometric characteristics, e.g., invariant sets and partitions (Mezić, 2005) or asymptotic behavior (Mauroy and Mezić, 2012; Mauroy et al., 2013), of the underlying nonlinear system. Moreover, the Koopman modes associated with Koopman eigenfunctions can yield the evolution of observables and the orbits of the system for all initial conditions. In this regard, the Koopman operator gives a complete description of the underlying nonlinear system, provided that the space of observables spans the elements of the state vector. If φ is an eigenfunction of the Koopman operator K and λ its eigenvalue, then the spectral problem of the Koopman operator reads K φ = λ φ. If φ_1, φ_2 are eigenfunctions of K with eigenvalues λ_1 and λ_2, then the product φ_1 φ_2 is an eigenfunction of K with eigenvalue λ_1 λ_2. This is an implication of the Koopman operator being (generally) infinite-dimensional. Budišić et al. (2012) and Mauroy et al. (2020, Chapter 1) provide a detailed review of Koopman operator properties.

The infinite-dimensional Koopman operator is approximated by its finite-dimensional projection using data-driven approaches that are well suited for this purpose. DMD provides a projection of the Koopman operator onto the space of linear observables (Brunton and Kutz, 2019), and EDMD produces more precise approximations by incorporating nonlinear observables that result in a higher-dimensional approximation (see Williams et al. (2015); Mauroy and Goncalves (2019) for deterministic and Wu and Noé (2020) for stochastic systems). Using an extended state vector for systems with input, together with the shift operator of a known input profile, Korda and Mezić (2018) argue that a finite-dimensional approximation to the operator yields a predictor of the form (2). Specifically, the matrix A is the projection of the Koopman operator onto a subspace of observables spanned by the lifting functions, where the lifting functions act only on the state and the control input remains unlifted. (If the subspace of observables is invariant under the Koopman operator, then all of the eigenvalues and eigenfunctions of the finite-dimensional approximation are also eigenvalues and eigenfunctions of the Koopman operator (Otto and Rowley, 2021). However, since such an invariant subspace is usually not known in advance, the projection only provides an approximation.) As such, the control input appears linearly in the resulting model, which is amenable for control design purposes.

2.3 Finite-dimensional projection using EDMD with control

Recent uses of the Koopman operator in control architectures have focused on deterministic systems (Proctor et al., 2018; Korda and Mezić, 2018; Peitz and Klus, 2019; Brunton et al., 2021); in particular, we follow Korda and Mezić (2018) and assume that the data is collected in the form of snapshots as

X = [x_1, ..., x_K],    Y = [y_1, ..., y_K],    U = [u_1, ..., u_K],

where y_j is the successor state of x_j under input u_j. Unlike the original DMD formulation (Rowley et al., 2009; Schmid, 2010), the data need not be sequentially ordered along a single trajectory of (1) as y_j = x_{j+1}, and we generally use snapshot triples collected along different trajectories (corresponding to different initial conditions). The action of the lifting functions is then given as

Z = [ψ(x_1), ..., ψ(x_K)],    Z⁺ = [E[ψ(y_1)], ..., E[ψ(y_K)]],

where ψ(x) = [ψ_1(x), ..., ψ_N(x)]^T is a given dictionary of nonlinear functions. For stochastic processes, we estimate the expected value of ψ(y_j) directly from experiments.

Then the matrices in (2) are solutions of the following least-squares problems:

(4)   min_{A, B} || Z⁺ − A Z − B U ||_F

(5)   min_{C} || X − C Z ||_F

We solve (4) and (5) using the normal equations (see Korda and Mezić (2018) for a discussion of the numerical considerations).
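As a minimal illustration of how (4) and (5) can be solved in practice, the following NumPy sketch fits the lifted linear model by least squares; it assumes the data matrices of this section are available, and the function and variable names are ours rather than the paper's.

import numpy as np

def edmd_with_control(X, Y, U, lift):
    """Least-squares EDMD with control, cf. problems (4)-(5) (a sketch).

    X, Y : (n, K) snapshot matrices of states and their successors
    U    : (m, K) matrix of applied inputs
    lift : callable mapping an (n, K) state matrix to an (N, K) matrix of observables
    """
    Z = lift(X)                     # lifted current states, shape (N, K)
    Zp = lift(Y)                    # lifted successors (expected values for stochastic data)
    N = Z.shape[0]

    # Problem (4): [A B] = argmin || Zp - A Z - B U ||_F, solved via the pseudoinverse
    AB = Zp @ np.linalg.pinv(np.vstack([Z, U]))
    A, B = AB[:, :N], AB[:, N:]

    # Problem (5): C = argmin || X - C Z ||_F, the linear decoder back to the state
    C = X @ np.linalg.pinv(Z)
    return A, B, C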

2.4 Reduced-order linear representation

The Koopman operator embeds the networked nonlinear dynamical system into a linear system, but of a higher dimension. In practice, one desires a low-dimensional model for fast optimization and real-time control. In the context of linear measurements, the basic DMD scheme has been extended to include exogenous effects and uses a truncated set of decomposed modes for order reduction (Proctor et al., 2016). Here, we extend this approach to the Koopman mode decomposition and establish a reduced-order Koopman representation with control. For this purpose, we start with a singular value decomposition of the stacked data,

(6)   [Z; U] = W S V^T,    W = [W_1; W_2],

where W is partitioned into two blocks according to the model dimensions (W_1 corresponding to the lifted state and W_2 to the input). Second, we perform an SVD on the lifted successor data,

(7)   Z⁺ ≈ P Σ_r Q^T,

where P contains the first r left singular vectors and r is the truncation value; hence, a reduced Koopman model of order r is established. The low-dimensional model matrices are computed as

(8)   A_r = P^T Z⁺ V S^{-1} W_1^T P,    B_r = P^T Z⁺ V S^{-1} W_2^T,    C_r = C P.

Thus the Koopman model (2) reduces to the coordinate z_r = P^T z by replacing A, B, and C with A_r, B_r, and C_r. In this regard, we use the first r Koopman modes to construct a low-dimensional network process representation, summarized in Algorithm 1. This strategy's success lies in the existence of a low-dimensional manifold on which the underlying dynamics evolve. Although this manifold depends on the control input, we will illustrate that this approach is sufficiently powerful to effectively capture manifolds for a given input training range. In other words, while our observations over networks are high-dimensional, the actual collective dynamics evolve in low dimension. The accuracy of this manifold identification improves as the input training range narrows. Peitz and Klus (2019) partition the input space into a set of subspaces and extract a surrogate model for each range, though the combinatorial nature of this approach prohibits its use on the high-dimensional input spaces present in networks.

Inputs: Data matrices Z, Z⁺, U, and X

Outputs: Reduced Koopman model matrices A_r, B_r, C_r

1: Choose a truncation value r
2: SVD: [Z; U] = W S V^T
3: Use the number of observables N to partition W = [W_1; W_2]
4: SVD: Z⁺ ≈ P Σ_r Q^T, truncated to the first r modes
5: Solve (5) to get C
6: A_r = P^T Z⁺ V S^{-1} W_1^T P,   B_r = P^T Z⁺ V S^{-1} W_2^T,   C_r = C P
Algorithm 1 Reduced Koopman identification of networked dynamics with inputs
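The NumPy sketch below mirrors Algorithm 1 as reconstructed above (a DMDc-style truncation of the lifted model); it is a sketch under those assumptions, not the paper's code, and all names are ours.

import numpy as np

def reduced_koopman(Z, Zp, U, C, r):
    """Reduced-order Koopman model with inputs (Algorithm 1 sketch).

    Z, Zp : (N, K) lifted snapshot matrices (current and successor)
    U     : (m, K) input matrix
    C     : (n, N) output matrix from problem (5)
    r     : truncation order of the reduced model
    """
    N = Z.shape[0]

    # Eq. (6): SVD of the stacked lifted-state/input data, partitioned by model dimensions
    W, S, Vh = np.linalg.svd(np.vstack([Z, U]), full_matrices=False)
    W1, W2 = W[:N, :], W[N:, :]

    # Eq. (7): SVD of the lifted successor data, truncated to the first r modes
    P = np.linalg.svd(Zp, full_matrices=False)[0][:, :r]

    # Eq. (8): reduced-order model matrices
    G = Zp @ Vh.T @ np.diag(1.0 / S)      # Zp V S^{-1}
    A_r = P.T @ G @ W1.T @ P
    B_r = P.T @ G @ W2.T
    C_r = C @ P
    return A_r, B_r, C_r, P               # P gives the reduced coordinate z_r = P.T @ psi(x)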

2.5 Choosing an appropriate subset in the function space

Under the assumption of a sufficiently rich basis and a large number of functions, one can expect a small approximation error (Williams et al., 2015). However, it remains an open question what type of observables will yield the best result for a specific problem. There are three popular choices: Hermite polynomials, radial basis functions (RBFs), and discontinuous spectral elements (Williams et al., 2015). A partially optimized space of observables can be attained by first selecting a parameterized feature space (Wu and Noé, 2020), e.g., Gaussian RBFs parameterized by a smoothing parameter, and then optimizing the associated parameters (the smoothing parameter in the case of Gaussian RBFs). Recent investigations of dictionary learning representations by Li et al. (2017); Yeung et al. (2019); Otto and Rowley (2019) are extremely promising. Generally, the physics of the problem, e.g., continuity and locality, can also inform the choice and the number of basis functions (Chen and Vaidya, 2019).

The dictionary could also include the system state observable (see, e.g., Williams et al. (2015); Korda and Mezić (2018)). This will enhance the linear state reconstruction from observables, i.e., decoding back to the original coordinates. However, it requires at least as many functions as the dimension of the original system state, which is undesirable for large networks evolving in lower intrinsic dimensions. Furthermore, the linear state observable generally lacks high enough resolution to capture complex features of nonlinear systems. Hence, when the full state observable is absent in the Koopman eigenfunction set, forcing the full state observable constraint in the Koopman-invariant subspace will result in overfitting. Moreover, it is impossible to determine a finite-dimensional Koopman-invariant subspace that includes the original state variables for any system with multiple fixed points or any more general attractors (Brunton et al., 2016).

3 Model predictive control for networked processes

The fundamental idea behind MPC is to measure the current state and design an open-loop optimal control over a finite time horizon based on a predictive model. For closed-loop behavior, the MPC applies only the first portion of the synthesized control during a short time interval. The controller then uses the updated state measurements to design the next open-loop control function, and this procedure repeats in the subsequent steps. Therefore, MPC yields a closed-loop control approach that concurrently optimizes system performance, handles nonlinearity, has robustness properties, and incorporates input and state constraints with desirable (stability) convergence properties–we refer the reader to Grüne and Pannek (2017) for an exposition. Extensions to stochastic systems and the cumulative reasons above have led to the fast growth of the MPC paradigm in the control systems literature.

3.1 Original MPC

For the original stochastic process in (1), we consider a nonlinear MPC problem that, at each time step of the closed-loop operation, solves the following optimization problem

(9)   minimize over u_0, ..., u_{N_p−1}:   sum_{k=0}^{N_p−1} [ l_k(E[x_k]) + u_k^T R u_k + r^T u_k ]
      subject to   c_k(E[x_k]) + D u_k ≤ b,   k = 0, ..., N_p−1,
                   E[x_0] = x(t),   with E[x_{k+1}] predicted from (1),

where x(t) is the current state, N_p is the prediction horizon, E[x_k] denotes the prediction of the expected state k steps ahead, l_k and c_k are nonlinear scalar-valued and nonlinear vector-valued functions of the state vector expected value, R is positive semidefinite, and the vector r, the vector b, and the matrix D define the input cost and the constraints, with n_c the number of constraints. At each time step, only the first element of the optimal control sequence is applied and the optimization is repeated at the next time step.

The optimization problem (9) is, in general, nonconvex and hard to solve to global optimality, particularly for large networks. Furthermore, we generally have no prior realization of the dynamics for accurate state prediction. Applying the Koopman operator, we transform this problem into a low-order convex optimization problem that is numerically tractable.

3.2 MPC via Koopman

The Koopman operator transforms the original MPC problem (9) into the following convex problem

(10)   minimize over u_0, ..., u_{N_p−1} and z_0, ..., z_{N_p}:   sum_{k=0}^{N_p−1} [ z_k^T Q z_k + q^T z_k + u_k^T R u_k + r^T u_k ]
       subject to   z_{k+1} = A z_k + B u_k,
                    E z_k + D u_k ≤ b,   k = 0, ..., N_p−1,
                    z_0 = ψ(x(t)),

where Q is positive semidefinite and z_0 is the lifted current state. The matrices E and D define the constraints, which become linear in the lifted space. The optimization problem (10) is convex, i.e., a quadratic program.

As suggested by Korda and Mezić (2018), one can transform the original optimization problem (9) into (10) by constructing the matrices Q and E and the vectors q and b using the embeddings (see Section 2), including in the lifting set the functions l_k and c_k themselves. Consequently, the resulting cost and constraint matrices in (10) are composed of all-zero, all-one, and identity blocks that simply select the corresponding lifted coordinates. Although this canonical approach always returns a linear cost function, if the original cost is quadratic we opt for the freedom of (10) and, instead of lifting the cost function itself, use the Koopman output matrix C to express the quadratic terms in the cost function of (10), thereby reducing the dimension of the lift.

Korda and Mezić (2018) show that the computational complexity of solving the MPC problem (10) can be rendered independent of the dimension of the lifted state by transforming it to a dense form. Hence, the computational cost of solving the dense form is comparable to solving a standard linear MPC on the same prediction horizon, with the same number of control inputs and a state-space dimension equal to n rather than N. Although we follow this strategy in our numerical implementation of the full-order Koopman MPC, we are less concerned about dimensionality when using the reduced-order Koopman model for MPC, since our approach uses a low-dimensional model. Recall that the first r modes are used to represent the dynamics in the reduced-order Koopman MPC framework. Therefore, the matrices A, B, and C in (10) are replaced with A_r, B_r, and C_r, respectively, and the dimension N is replaced with r. Algorithm 2 shows the reduced-order Koopman MPC procedure when each cost function in the original MPC problem (9) is quadratic in the state vector expected value through a positive semidefinite matrix Q_x and vector q_x, and each constraint function is linear through a matrix E_x.

Cost function in (9): l_k(E[x_k]) = E[x_k]^T Q_x E[x_k] + q_x^T E[x_k]

Constraints in (9): E_x E[x_k] + D u_k ≤ b

Input: Current system state x(t)

Output: Control input u_0

1: Set Q = C_r^T Q_x C_r, q = C_r^T q_x, and E = E_x C_r
2: Lift the current state, z_0 = P^T ψ(x(t)), and solve the convex optimization (10) for the input sequence u_0, ..., u_{N_p−1}
3: Keep and apply only the first computed control input u_0
4: Update the current system state and repeat the procedure for the next time step
Algorithm 2 Network MPC via Koopman
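For concreteness, the sketch below sets up one receding-horizon step of the Koopman MPC (10) as a quadratic program in CVXPY, using only simple box constraints on the input; it is an illustrative sketch (the dense-form reformulation discussed above is not shown), and the names and the constraint set are our assumptions rather than the paper's implementation.

import numpy as np
import cvxpy as cp

def koopman_mpc_step(A, B, C, lift, x_now, Np, Qx, qx, R, u_min, u_max):
    """One closed-loop step of the (reduced) Koopman MPC, cf. (10) and Algorithm 2 (a sketch).

    A, B, C : (possibly reduced) Koopman model matrices
    lift    : map from the measured state to the (reduced) lifted state
    Qx, qx  : quadratic/linear weights on the predicted expected state
    R       : positive-semidefinite input weight
    """
    nz, m = A.shape[0], B.shape[1]
    z0 = lift(x_now)

    Z = cp.Variable((nz, Np + 1))
    U = cp.Variable((m, Np))

    cost = 0
    constraints = [Z[:, 0] == z0]
    for k in range(Np):
        xk = C @ Z[:, k]                       # predicted expected state, decoded from the lift
        cost += cp.quad_form(xk, Qx) + qx @ xk + cp.quad_form(U[:, k], R)
        constraints += [Z[:, k + 1] == A @ Z[:, k] + B @ U[:, k],
                        U[:, k] >= u_min, U[:, k] <= u_max]

    cp.Problem(cp.Minimize(cost), constraints).solve()
    return U.value[:, 0]                        # apply only the first computed input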

4 Application: network SIS epidemic model

We apply our proposed approach to the networked SIS model–a benchmark model for epidemics over networks. We give a short description of the SIS model in the next subsection but encourage the reader to see Van Mieghem et al. (2009) for a detailed study of its dynamical properties.

4.1 Underlying Markov process

The Markov process is defined by a set of rules describing the possible transitions between different compartments. In the standard network SIS model (Van Mieghem et al., 2009), a susceptible agent adjacent to an infected neighbor experiences infection through a Poisson process with rate β; the independent processes merge, and thus the infection rate increases with the number of infected neighbors. Similarly, an infected agent recovers back to the susceptible state through a Poisson process with rate δ. Figure 4 shows the transition diagram, where S and I denote the Susceptible and Infected compartments, respectively, and Y_i denotes the number of infected agents neighboring agent i.

For each node i, consider a binary random variable X_i ∈ {0, 1}, with X_i = 0 when node i is susceptible and X_i = 1 when infected, and denote the value of X_i at time t by X_i(t). The transitions between S and I are modeled via the following continuous-time Markov process:

(11)   Pr[ X_i(t + Δt) = 1 | X_i(t) = 0, X(t) ] = β Y_i(t) Δt + o(Δt),
       Pr[ X_i(t + Δt) = 0 | X_i(t) = 1, X(t) ] = δ Δt + o(Δt),

where X(t) = [X_1(t), ..., X_n(t)]^T is the joint state of the network, Y_i(t) is the value of Y_i at time t, and Δt is an infinitesimal time step of the underlying Poisson processes.

Figure 4: Transition graph for node i with Y_i infected neighbors in the SIS model.
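To make the process (11) concrete, the following event-driven (Gillespie-type) Python sketch simulates it on an arbitrary adjacency matrix; it is a minimal stand-in for the GEMF simulator used later, not a reimplementation of it, and the function name and interface are ours.

import numpy as np

def simulate_sis(adj, x0, beta, delta, t_end, rng=None):
    """Event-driven simulation of the networked SIS Markov process (11); a sketch.

    adj   : (n, n) adjacency matrix of the contact network
    x0    : length-n binary initial state (1 = infected, 0 = susceptible)
    beta  : infection rate (scalar or length-n vector for heterogeneous rates)
    delta : recovery rate
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    beta = np.broadcast_to(beta, x.shape)
    n, t = x.size, 0.0

    while t < t_end:
        infected_neighbors = adj @ x                        # Y_i in Eq. (11)
        infect_rates = beta * infected_neighbors * (1 - x)  # S -> I transition rates
        recover_rates = delta * x                           # I -> S transition rates
        rates = np.concatenate([infect_rates, recover_rates])
        total = rates.sum()
        if total == 0:                                      # disease-free absorbing state
            break
        t += rng.exponential(1.0 / total)                   # time to the next event
        if t > t_end:
            break
        event = rng.choice(rates.size, p=rates / total)     # which transition fires
        x[event % n] = 1.0 if event < n else 0.0            # infection or recovery
    return x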

4.2 Koopman identification

We use the stochastic approach proposed in GEMF by Sahneh et al. (2017) to simulate the SIS Markov process (11) on arbitrary networks. At each time step, the state vector x is a discrete binary vector, where the i-th element of x is 0 if agent i is susceptible and 1 if infected. Algorithm 3 describes the data generation and aggregation using stochastic simulators. We choose a number of initial conditions at random. For each fixed initial condition, we then simulate (11) several times and average to obtain the expected values of the dictionary functions. We record the first and last data points of each simulation running over the time period Δt. Therefore, the mapping (1) takes the state at the beginning of the period and gives the (expected) state at its end. To learn the system response to a range of inputs, we select a random perturbation vector within a given range and apply that input throughout the corresponding trajectory.

Inputs: number of initial conditions K, number of runs per initial condition M, time period Δt, admissible input range, and the contact network

Outputs: Data matrices X, Y, and U

The GEMF simulator takes the current state and the piecewise-constant control input and gives the network state at the end of the time period Δt.

1: for j = 1, ..., K do
2:     Randomly generate an initial binary state x_j with 0 and 1 elements
3:     Randomly generate an input vector u_j satisfying the admissible range
4:     Initialize the accumulator for the expected successor state
5:     for l = 1, ..., M do
6:         Run GEMF for x_j and u_j over Δt and record the resulting state
7:         Accumulate the running average of the resulting states
8:     end for
9: Stack the collected snapshots into X, Y, and U
Algorithm 3 Data generation and collection
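A rough Python sketch of this collection loop is given below, reusing the simulate_sis stand-in from Section 4.1; the rate parametrization beta = beta0 * (1 - u) reflects our reading of the input model described in the following paragraphs and is an assumption, and the remaining names are ours.

import numpy as np

def generate_snapshots(adj, beta0, delta, dt, n_ic, n_runs, u_max, rng=None):
    """Collect snapshot triples (x_j, y_j, u_j) as in Algorithm 3 (a sketch).

    Each successor y_j is averaged over n_runs stochastic simulations so that it
    approximates the expected next state for the fixed initial condition x_j.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = adj.shape[0]
    X, Y, U = [], [], []
    for _ in range(n_ic):
        x0 = (rng.random(n) < rng.random()).astype(float)   # random binary initial state
        u = rng.uniform(0.0, u_max, size=n)                  # random admissible input vector
        beta = beta0 * (1.0 - u)                             # modulated infection rates (assumed input model)
        y_mean = np.zeros(n)
        for _ in range(n_runs):
            y_mean += simulate_sis(adj, x0, beta, delta, dt, rng)
        X.append(x0)
        Y.append(y_mean / n_runs)
        U.append(u)
    return np.array(X).T, np.array(Y).T, np.array(U).T       # (n, K) data matrices X, Y, U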

We consider the constant function 1 and Gaussian radial basis functions (RBFs) for the dictionary functions. We choose the RBF centers from k-means clustering (Bishop, 2006) with a pre-specified number of clusters on the combined data set. In doing so, the RBF centers are directly related to the density of data points, effectively distributing the RBF centers over the cloud of points (Williams et al., 2015).
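A small sketch of such a dictionary is shown below, assuming scikit-learn's KMeans for the centers; the width parameter gamma plays the role of the smoothing parameter mentioned in Section 2.5, and the function names are ours.

import numpy as np
from sklearn.cluster import KMeans

def build_rbf_dictionary(X_data, n_rbf, gamma):
    """Constant function plus Gaussian RBFs centered by k-means (a sketch).

    X_data : (n, K) matrix of state samples used to place the centers
    n_rbf  : number of RBF centers
    gamma  : Gaussian width (smoothing) parameter
    """
    centers = KMeans(n_clusters=n_rbf, n_init=10).fit(X_data.T).cluster_centers_

    def lift(X):
        # X: (n, K) states -> (1 + n_rbf, K) lifted observables
        sq_dist = ((X.T[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.vstack([np.ones(X.shape[1]), np.exp(-gamma * sq_dist).T])

    return lift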

We adopt the variation of the infection rates β_i, i = 1, ..., n, as inputs to the spreading dynamics, letting β_i = β_0(1 − u_i), with β_0 indicating a constant (passive) infection rate and u_i the input to agent i. In practice, the infection rate can be regulated by putting restrictions on traffic/travel, quarantining subpopulations, distributing masks, vaccinations, or raising awareness about the disease (Nowzari et al., 2016). The control input is constrained by constants u_min and u_max as u_min ≤ u_i ≤ u_max ≤ 1; thus the infection rate remains nonnegative. One may also constrain the total control input over all agents by a budget as the sum of the u_i.

We examined our approach on three random graph models as testbeds: randomly generated geometric (Geo), Erdős-Rényi (ER), and Watts-Strogatz (WS) graphs, each with 100 nodes and a fixed average degree. To conserve space, whenever the results of the other models can be interpreted similarly, we present only the results for ER networks. We compare our data-driven approach in predicting the networked dynamics to the epidemic mean-field model, to which we provide both the graph structure and the SIS model (Sahneh et al., 2013). Note that we are unable to offer a similar comparison for the control of networked processes (Section 3) due to a lack of known algorithms able to handle large-scale graphs–even with knowledge of the network structure and nodal dynamics.

We set the number of initial conditions, the number of runs per initial condition, the time period, and the input range in Algorithm 3 to fixed values. Moreover, we average the prediction error over 1000 randomly chosen initial conditions that are allowed to evolve for a time period Δt. Although this time period is equivalent to one step in Equation (2), i.e., in the operator sense, it includes multiple transitions (events) in Equation (11), i.e., in the GEMF stochastic simulator. Furthermore, choosing a large Δt may incorporate less of the transient phase and even result in better prediction metrics, but it lacks the precision needed for our control design later.

4.2.1 Constant input

In this section, we consider a network of agents with the same (constant) infection rate. Figure 6 shows the average fraction of the infected population for randomly chosen initial infections, averaged over the stochastic runs, together with the predictions using the mean-field model (Sahneh et al., 2013), the full-order Koopman model (4)-(5), and the reduced Koopman model (8). The Koopman identification operates successfully in predicting the fraction of the infected population, and the performance is comparable with the mean-field theory, which is model-based and uses full information about the dynamical process, system parameters, and network structure–while our approach does not. Figure 10 illustrates the corresponding predictions for the nodal probabilities of infection.

We obtain the reduced-order model by truncating the full-order Koopman model at r = 5 modes for the ER and WS networks and r = 10 for the Geo network. The number of RBFs for the ER and WS networks is 200, while it is 300 for the Geo network–we use the same values subsequently.

We choose these numbers by investigating the prediction errors in Figure 14. Each point represents the Koopman prediction error over 1000 initial conditions, averaged among the prediction errors for all agents. Hence, each error is obtained by computing two averages: one among all agents and one among all initial conditions. First, we observe that the average prediction error for each of the ER and WS networks remains almost unchanged when increasing the number of RBFs beyond 200; this number is larger for Geo networks. We stop increasing the number of RBFs beyond these values to avoid increasing the complexity and thus overfitting. Second, the evaluation of the prediction error for the reduced Koopman models in Figure 14(b) illustrates that increasing the number of Koopman modes beyond 5 for the ER and WS networks and 10 for the Geo network has a negligible effect on error reduction. Consequently, while the Koopman operator embeds the stochastic nonlinear system into a high-dimensional linear model, e.g., the SIS model over a network of 100 agents may be embedded into a dimension of 200 or 300, its mode decomposition can yield a much smaller, but effective, representation. The considered networks with 100 agents are successfully represented by linear models of 5 and 10 states (fifth- and tenth-order linear models). This implies that exploring the low-dimensional manifold that describes the underlying dynamics is a promising approach for the challenges of optimization and control over networks.

We further examine the average errors for different reproduction numbers, obtained for different corresponding infection rates, in Figure 14(c). We observe that the average nodal error decreases with increasing reproduction number. For large reproduction numbers, connections and interactions between agents grow stronger and the overall network operates more uniformly. This uniformity makes the network more predictable. Figure 14 also indicates that prediction in ER and WS networks is easier than in Geo networks; thus, we can represent them by lower-order models–we attribute this to the slower mixing dynamics and larger diameter of spatial graphs.

Figure 6: Fraction of infected population for the ER network.
Figure 10: Probability of infection for each individual, corresponding to Figure 6. The panels show, from left to right, the infection probabilities computed by the mean-field model, the full Koopman model, and the reduced Koopman model.
Figure 14: Different average errors computed using 1000 randomly generated initial conditions.

4.2.2 Varying input

We examine the efficacy of the proposed approach when the infection rate varies, i.e., β_i = β_0(1 − u_i), where β_0 is constant for all agents and u is the (heterogeneous) input vector to the system. As an example of conditions under which the infection rate is both time-varying and heterogeneous, we examine the Koopman model prediction performance in response to the oscillatory time-varying input shown in Figure 15. We train the Koopman model for two different input ranges, one narrower and one broader, and compare the results. Figure 18 illustrates that the prediction error increases as the training range widens, highlighting the importance of the input training range for the accuracy of the identified model.

To further quantify our results, we probe two types of errors as metrics of performance. First, we compute the relative error when we apply a constant homogeneous input to the model trained for a given input range. The corresponding results are shown in Figure 23, which gives the average prediction errors for the trained Koopman models after a time period Δt. Figure 23 indicates that the average relative error increases as the input approaches the boundaries of the training range. Comparing the errors for the two ranges reveals that the narrower input training range generally produces less error, and thus a more accurate model (see Figure 18). Next, while the average error for most inputs in Figure 23 is larger when using the reduced Koopman model, we observe an exception for input values corresponding to reproduction numbers near 1 when the model is trained for the broader range. This improvement is a result of the balanced truncation of the dynamics in the reduced Koopman model and less overfitting compared to the full Koopman model (Rowley and Dawson, 2017b). Second, we consider the average error for heterogeneous inputs shown in Table 1; in this case, the error is averaged over 1000 trajectories corresponding to 1000 randomly generated initial conditions and control input vectors. Although the prediction error increases with a broader training range or with further reduction of the Koopman model, the full Koopman model may still experience overfitting (Figure 23). The reduced Koopman model's proper mode decomposition then results in more accurate prediction by refining and filtering out the noisy part of the identified model, hence preventing overfitting.

Figure 15: Time-varying heterogeneous infection rate input.
Figure 18: Prediction over the ER network for the varying input, with the model trained on the narrower input range (left) and the broader range (right).
Figure 23: Average prediction error computed over 1000 randomly generated initial conditions for different homogeneous constant inputs in the ER network. The first and second rows show the results for the narrower and broader training ranges, respectively, with the left column representing the full Koopman model and the right the reduced Koopman model.
Training range      Narrower                Broader
Koopman type        Full        Reduced     Full        Reduced
ER                  11.55       24.83       26.48       65.40
Geo                 12.50       28.65       25.73       59.65
WS                  11.39       24.72       26.04       65.80
Table 1: Average error (%) for different input training ranges

4.3 Koopman MPC for networked SIS

4.3.1 Limited budget problem

In this section, we consider a linear cost function in (9), l_k(E[x_k]) = 1^T E[x_k], where 1 is the all-ones vector, thus minimizing the fraction of the infected population. Instead of explicitly minimizing the control expenditure, we limit the total control action at each time step by a budget by enforcing a constraint on the sum of the nodal inputs. Furthermore, we assume the control input at each node is limited as 0 ≤ u_i ≤ 1, so that the infection rate of each node can be neither negative nor increased beyond the initial value β_0. We impose no state constraints. The problem becomes an optimal assignment of resources to mitigate the epidemic with a prediction horizon N_p.

We assume 90 percent of the population is initially infected. For comparison, we present the results of another scenario where the total available budget is distributed uniformly among all agents. For the simulation, we fix the budget and the prediction horizon. Figure 25 illustrates a typical system response where, on the one hand, a uniform resource allocation fails to mitigate the epidemic and drives the system into an endemic state and, on the other hand, the MPC-via-Koopman approaches operate successfully to halt the epidemic throughout the network. Both the full and reduced Koopman models perform almost equally, with a slight advantage for the full Koopman MPC, indicating that the reduced Koopman MPC is nearly as effective as the full Koopman MPC despite being of significantly lower order. Figure 28 shows the control distributions and the nodes' Katz centrality. The optimal control strategy in this limited budget case, with the linear MPC cost function, is constant over time and distributes the total budget to the nodes with the largest centrality measures. Thus, the most central nodes are assigned the maximum control action while the others, with smaller centrality measures, are left without action.

This strategy is significant for practical use; e.g., if the control action is to vaccinate agents, the resource allocation policy recommends vaccinating only the most central ones. We emphasize that the control architecture's assessment of the resource allocation strategy and its identification of the important nodes are accomplished exclusively by nonlinear mode decomposition of the available data, without any knowledge of system parameters or network geometry. Table 2 compares the average number of new infection cases after applying control, obtained by averaging over trajectories of 1000 randomly selected initial conditions. We observe fewer infection cases using the full Koopman model for the ER and Geo networks in the limited budget problem. However, in the WS network, the reduced-order Koopman model induces fewer infection cases in the limited budget problem, an improvement due to the proper mode decomposition in the reduced model, which reduces overfitting. Table 2 also confirms that controlling the epidemic in the Geo network is more difficult than in the ER and WS networks.

Figure 25: Fraction of infected population under MPC with limited total budget in ER network.
Figure 28: Control distribution in Figure 25. The left figure shows the control distribution, and the right shows the corresponding control input differences between full and reduced Koopman MPC.
MPC strategy        Limited budget          Minimum cost
Koopman type        Full        Reduced     Full        Reduced
ER                  29.40       33.55       24.39       35.10
Geo                 92.60       126.05      30.51       86.21
WS                  64.27       48.11       33.80       45.13
Table 2: Average number of S-to-I transitions, i.e., number of new infections, after applying MPC in a network with 100 nodes

4.3.2 Minimum cost problem

In the previous subsection, the control action turned out to be constant in time for the linear cost function. To reach a time-varying resource allocation strategy, we consider a quadratic cost function in (9), with a positive semidefinite weight on the expected state. Although we impose no constraint directly on the total available budget, the control action of each node is still limited as 0 ≤ u_i ≤ 1, and it also contributes to the cost function through nonzero values of R and r in (9). There is again no constraint on the system state. Consequently, our aim is to mitigate an existing epidemic while minimizing the costs.

For the numerical values, we choose the weighting matrices and vectors in (9) in terms of identity matrices and all-ones vectors of appropriate size. Figure 30 shows a typical system response where we observe the Koopman models' success in mitigating the epidemic, something that is not possible with a uniform resource distribution. Moreover, while the full Koopman model performs slightly better, the reduced Koopman model's performance is comparable. Figure 33 indicates the control allocation of the full Koopman model at an early and a later time of the epidemic. The reduced Koopman model decides qualitatively similar control actions (we avoid repeating similar results in the paper).

Figure 33 illustrates that the MPC effort initially concentrates mainly on reducing the epidemic by increasing and saturating the control actions near the maximum value 1. Hence, only some nodes with small centrality measures are not assigned their maximum possible control (see Figure 33, left). As time passes and the epidemic decays, the MPC strategy shifts priority toward minimizing the control action, which corresponds to a smaller budget, so the applied control inputs decrease significantly (see Figure 33, right). Figure 35 further illustrates this through the time variation of the total control action, where we also plot the MPC cost function values during the epidemic. For the total control action, Figure 35 verifies a nonincreasing pattern where the reduced Koopman model often induces more control effort than the full Koopman model, except at the beginning. Figure 35 also shows that the minimum cost function value achieved by the full Koopman model is smaller than that of the reduced-order model. Finally, we observe for the minimum cost problem in Table 2 that, after applying MPC, the full Koopman model results in fewer new infection cases than the reduced one does.

Figure 30: Fraction of infected population under MPC with minimum cost in the ER network.
Figure 33: Control distribution by the full Koopman MPC in Figure 30 at an early time (left) and a later time (right).
Figure 35: Total control applied (solid lines) and cost function value (dash-dotted lines) in Figure 30. Dark colors show the results of the full Koopman model and pale colors the reduced Koopman model.

5 Conclusion and discussion

Modern data-driven techniques yield promising tools to identify, optimize, and control dynamical processes over complex networks. In this work, we use operator-theoretic methods to characterize stochastic nonlinear dynamics and represent them in low-dimensional linear forms. This enables accurate prediction of complex networked processes through interpretable models that can be effectively utilized to reformulate existing optimization and control problems on networks. This approach converts the original network MPC, a nonlinear optimization problem, into a convex problem with fewer decision variables. As a specific application of the proposed method, we demonstrated its power to predict and control epidemic spread over networks. Among the different random graphs studied, the random geometric networks (Geo) showed more complicated features for identification and control. That is, the Geo network needs more effort compared to the ER and WS networks. We attribute this to the slower mixing dynamics and larger diameter of spatial graphs.

Optimization of network dynamics has a long-standing history due to its paramount importance in areas as diverse as engineering, physics, biology, the social sciences, computer science, and economics. However, this vast literature still fails to achieve a comprehensive solution for challenging features originating from nonlinear phenomena, stochastic processes, large system scale, and complex network structures. Our control inputs differ from the strategies adopted in Preciado and Jadbabaie (2009); Van Mieghem et al. (2011), which remove nodes and/or links and lead to combinatorial NP-hard problems; similar to Preciado et al. (2014), we distribute resources that promote corrective behaviors in terms of continuous properties of nodes. Moreover, instead of the off-line strategies in Preciado et al. (2014); Shakeri et al. (2015); Nowzari et al. (2017); Watkins et al. (2018), our approach is an online control strategy that monitors the system state. Therefore, it provides feedback and thus possesses robustness properties against system uncertainties and exogenous disturbances, all while requiring no knowledge of the network structure or parameters.

While optimal control strategies have recently been employed to solve various online control problems over networks (Khanafer and Başar, 2014; Eshghi et al., 2016; Kandhway and Kuri, 2016; He and Van Mieghem, 2019; Dashtbali et al., 2020; Watkins et al., 2020), they fall short in practice. Specifically, they are based on unrealistically simplified deterministic models, have a computational burden that is intractable for large networks, and require complete knowledge of network geometry and dynamical parameters. Our proposed approach leverages the advantages of operator-theoretic methods (Klus et al., 2018) to treat the original problem within a framework whose fundamental theories and practices are well developed. Furthermore, we utilize modern data-driven techniques to identify such operators for network dynamics. The success of our proposed strategy lies in the topological conjugacy (Lan and Mezić, 2013) that allows us to exploit the linearity of the Koopman dynamics and tame the original nonlinear dynamics. Unlike local linearization approaches (Khanafer and Başar, 2014), which are valid only within a (small) neighborhood of invariant sets, Koopman eigenfunctions extend the validity of the linear model to the whole basin of attraction. Furthermore, we offer computationally tractable solutions, in contrast to recent works that use nonlinear models for more accurate and stable control (He and Van Mieghem, 2019; Watkins et al., 2020) but involve recalcitrant nonlinear programs and require exact knowledge of the underlying dynamics, model parameters, and network geometry. Hence, the importance of this work lies in establishing an approach that does not ask for often-unknown network information and enables practical linear control strategies that are valid over the whole state space.

Model reduction in networks is often based on graph clustering and aggregation (Cheng and Scherpen, 2021), which rests on assumptions about the network structure. However, network intricacies and interconnections give rise to dynamics that evolve on low-order manifolds, and operator-theoretic techniques can capture these manifolds efficiently (Klus et al., 2018). Approximating the Koopman operator using EDMD followed by balanced truncation represents the nonlinear dynamics on these low-order manifolds by retaining the most effective Koopman eigenfunctions. We use such low-order linear models to provide a tractable framework for central control problems, such as MPC, over large networks.
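A rough sketch of this reduction step is given below, assuming the lifted model (A, B, C) obtained from EDMD is Schur stable and minimal; the function name and the choice of reduced order r are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce the lifted linear model (A, B, C) to order r by balanced truncation.

    Assumes A is Schur stable (all eigenvalues strictly inside the unit circle)
    and the realization is minimal, so both Gramians are positive definite.
    """
    Wc = solve_discrete_lyapunov(A, B @ B.T)       # controllability Gramian
    Wo = solve_discrete_lyapunov(A.T, C.T @ C)     # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                      # s holds the Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                          # reduction basis
    Tinv = S @ U[:, :r].T @ Lo.T                   # projection onto retained modes
    return Tinv @ A @ T, Tinv @ B, C @ T, s        # (Ar, Br, Cr) and Hankel values
```

The decay of the returned Hankel singular values indicates how small r can be taken before prediction accuracy degrades, which is what keeps the downstream MPC tractable on large networks.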

In what follows, we acknowledge and discuss the limitations and possible extensions of our approach. Although EDMD is a simple approach, it approximates the Koopman operator well only if the library of observables is chosen appropriately. Deep learning techniques have been proposed to improve this choice and better capture invariant Koopman subspaces (Li et al., 2017; Lusch et al., 2018; Otto and Rowley, 2019; Mardt et al., 2020; Pan and Duraisamy, 2020); incorporating them may therefore yield more accurate prediction and control of network processes. Moreover, we assume no prior knowledge of the system dynamics, but when such knowledge is available, physics-informed machine learning techniques (Karniadakis et al., 2021; Pan and Duraisamy, 2020) can reduce the required data volume and achieve better accuracy, faster training, and improved generalization. If, in addition, information on the network geometry is available, the sparse reduced-order modeling approach to full-state estimation (Loiseau et al., 2018) could be used to monitor the states of only a few agents, yielding a more practical version of this work, since full measurement of the network state is not always available. Another extension is multi-scale identification of the underlying dynamics by collecting data on groups of agents instead of individual agents. Although this group-based strategy estimates only the state of each group, not of each agent, it is particularly effective over large networks because it significantly reduces the computational burden (Moon et al., 2021).

References

  • Abraham and Murphey (2019) Abraham, I. and T. D. Murphey (2019). Active learning of dynamics for data-driven control using koopman operators. IEEE Transactions on Robotics 35(5), 1071–1083.
  • Arbabi and Mezić (2017) Arbabi, H. and I. Mezić (2017, Dec). Study of dynamics in post-transient flows using koopman mode decomposition. Phys. Rev. Fluids 2, 124402.
  • Arbabi and Mezić (2017) Arbabi, H. and I. Mezić (2017). Ergodic theory, dynamic mode decomposition, and computation of spectral properties of the koopman operator. SIAM Journal on Applied Dynamical Systems 16(4), 2096–2126.
  • Avila and Mezić (2020) Avila, A. M. and I. Mezić (2020). Data-driven analysis and forecasting of highway traffic dynamics. Nature Communications 11, 2090.
  • Bishop (2006) Bishop, C. (2006). Pattern Recognition and Machine Learning. Springer-Verlag.
  • Boskic et al. (2020) Boskic, L., C. N. Brown, and I. Mezić (2020). Koopman mode analysis on thermal data for building energy assessment. Advances in Building Energy Research 0(0), 1–15.
  • Bruder et al. (2020) Bruder, D., X. Fu, R. B. Gillespie, C. D. Remy, and R. Vasudevan (2020). Data-driven control of soft robots using koopman operator theory. IEEE Transactions on Robotics, 1–14.
  • Brunton et al. (2021) Brunton, S., M. Budisic, E. Kaiser, and J. Kutz (2021). Modern koopman theory for dynamical systems. ArXiv abs/2102.12086.
  • Brunton et al. (2017) Brunton, S. L., B. W. Brunton, J. L. Proctor, E. Kaiser, and J. N. Kutz (2017). Chaos as an intermittently forced linear system. Nature Communications 8, 19.
  • Brunton et al. (2016) Brunton, S. L., B. W. Brunton, J. L. Proctor, and J. N. Kutz (2016, 02). Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control. PLOS ONE 11(2), 1–19.
  • Brunton and Kutz (2019) Brunton, S. L. and J. N. Kutz (2019). Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. Cambridge University Press.
  • Budišić and Mezić (2012) Budišić, M. and I. Mezić (2012). Geometry of the ergodic quotient reveals coherent structures in flows. Physica D: Nonlinear Phenomena 241(15), 1255–1269.
  • Budišić et al. (2012) Budišić, M., R. Mohr, and I. Mezić (2012). Applied koopmanism. Chaos: An Interdisciplinary Journal of Nonlinear Science 22(4), 047510.
  • Carli et al. (2020) Carli, R., G. Cavone, N. Epicoco, P. Scarabaggio, and M. Dotoli (2020). Model predictive control to mitigate the covid-19 outbreak in a multi-region scenario. Annual Reviews in Control.
  • Chen and Shan (2020) Chen, T. and J. Shan (2020). Koopman-operator-based attitude dynamics and control on so(3). Journal of Guidance, Control, and Dynamics 43(11), 2112–2126.
  • Chen and Vaidya (2019) Chen, Y. and U. Vaidya (2019). Sample complexity for nonlinear stochastic dynamics. In 2019 American Control Conference (ACC), pp. 3526–3531.
  • Cheng and Scherpen (2021) Cheng, X. and J. Scherpen (2021). Model reduction methods for complex network systems. Annual Review of Control, Robotics, and Autonomous Systems 4(1), 425–453.
  • Das et al. (2018) Das, A. K., B. Huang, and U. Vaidya (2018). Data-driven optimal control using transfer operators. In 2018 IEEE Conference on Decision and Control (CDC), pp. 3223–3228.
  • Dashtbali et al. (2020) Dashtbali, M., A. Malek, and M. Mirzaie (2020). Optimal control and differential game solutions for social distancing in response to epidemics of infectious diseases on networks. Optimal Control Applications and Methods 41(6), 2149–2165.
  • Eshghi et al. (2016) Eshghi, S., M. H. R. Khouzani, S. Sarkar, and S. S. Venkatesh (2016). Optimal patching in clustered malware epidemics. IEEE/ACM Transactions on Networking 24(1), 283–298.
  • Fujii and Kawahara (2019) Fujii, K. and Y. Kawahara (2019). Dynamic mode decomposition in vector-valued reproducing kernel hilbert spaces for extracting dynamical structure among observables. Neural Networks 117, 94–103.
  • Grüne and Pannek (2017) Grüne, L. and J. Pannek (2017). Nonlinear model predictive control (Second ed.). Springer.
  • He and Van Mieghem (2019) He, Z. and P. Van Mieghem (2019). Optimal induced spreading of sis epidemics in networks. IEEE Transactions on Control of Network Systems 6(4), 1344–1353.
  • Hiramatsu et al. (2020) Hiramatsu, N., Y. Susuki, and A. Ishigame (2020, Aug). Koopman mode decomposition of oscillatory temperature field inside a room. Phys. Rev. E 102, 022210.
  • Kaiser et al. (2017a) Kaiser, E., J. N. Kutz, and S. L. Brunton (2017a, November). Data-driven discovery of Koopman eigenfunctions for control. In APS Division of Fluid Dynamics Meeting Abstracts, APS Meeting Abstracts, pp. M27.006.
  • Kaiser et al. (2017b) Kaiser, E., J. N. Kutz, and S. L. Brunton (2017b, November). Data-driven discovery of Koopman eigenfunctions for control. In APS Division of Fluid Dynamics Meeting Abstracts, APS Meeting Abstracts, pp. M27.006.
  • Kandhway and Kuri (2016) Kandhway, K. and J. Kuri (2016). Campaigning in heterogeneous social networks: Optimal control of si information epidemics. IEEE/ACM Transactions on Networking 24(1), 383–396.
  • Karniadakis et al. (2021) Karniadakis, G. E., I. G. Kevrekidis, L. Lu, P. Perdikaris, S. Wang, and L. Yang (2021). Physics-informed machine learning. Nature Reviews Physics 3, 422–440.
  • Khanafer and Başar (2014) Khanafer, A. and T. Başar (2014). An optimal control problem over infected networks. In Proceedings of the International Conference of Control, Dynamic Systems, and Robotics, paper 125, pp. 1–6.
  • Klus et al. (2018) Klus, S., F. Nüske, P. Koltai, H. Wu, I. Kevrekidis, C. Schütte, and F. Noé (2018). Data-driven model reduction and transfer operator approximation. Journal of Nonlinear Science 28(4), 985–1010.
  • Korda and Mezić (2018) Korda, M. and I. Mezić (2018). Linear predictors for nonlinear dynamical systems: Koopman operator meets model predictive control. Automatica 93, 149 – 160.
  • Korda and Mezić (2020) Korda, M. and I. Mezić (2020). Optimal construction of koopman eigenfunctions for prediction and control. IEEE Transactions on Automatic Control 65(12), 5114–5129.
  • Korda et al. (2018) Korda, M., Y. Susuki, and I. Mezić (2018). Power grid transient stabilization using koopman model predictive control. IFAC-PapersOnLine 51(28), 297–302. 10th IFAC Symposium on Control of Power and Energy Systems CPES 2018.
  • Kutz et al. (2016a) Kutz, J. N., S. L. Brunton, B. W. Brunton, and J. L. Proctor (2016a). Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems. SIAM.
  • Kutz et al. (2016b) Kutz, J. N., S. L. Brunton, B. W. Brunton, and J. L. Proctor (2016b). Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems. SIAM.
  • Lan and Mezić (2013) Lan, Y. and I. Mezić (2013). Linearization in the large of nonlinear systems and koopman operator spectrum. Physica D: Nonlinear Phenomena 242(1), 42–53.
  • Li et al. (2017) Li, Q., F. Dietrich, E. M. Bollt, and I. G. Kevrekidis (2017). Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the koopman operator. Chaos: An Interdisciplinary Journal of Nonlinear Science 27(10), 103111.
  • Loiseau et al. (2018) Loiseau, J.-C., B. R. Noack, and S. L. Brunton (2018). Sparse reduced-order modelling: sensor-based dynamics to full-state estimation. Journal of Fluid Mechanics 844, 459–490.
  • Lusch et al. (2018) Lusch, B., J. Kutz, and S. Brunton (2018). Deep learning for universal linear embeddings of nonlinear dynamics. Nature Communications 9, 4950.
  • Mardt et al. (2020) Mardt, A., L. Pasquali, F. Noé, and H. Wu (2020). Deep learning markov and koopman models with physical constraints. Proceedings of Machine Learning Research 107, 451–475.
  • Mauroy and Goncalves (2019) Mauroy, A. and J. Goncalves (2019). Koopman-based lifting techniques for nonlinear systems identification. IEEE Transactions on Automatic Control Early Access, 1–16.
  • Mauroy et al. (2020) Mauroy, A., I. Mezic, and Y. Susuki (2020). The Koopman Operator in Systems and Control: Concepts, Methodologies, and Applications. Springer.
  • Mauroy and Mezić (2012) Mauroy, A. and I. Mezić (2012). On the use of fourier averages to compute the global isochrons of (quasi)periodic dynamics. Chaos: An Interdisciplinary Journal of Nonlinear Science 22(3), 033112.
  • Mauroy and Mezić (2016) Mauroy, A. and I. Mezić (2016). Global stability analysis using the eigenfunctions of the koopman operator. IEEE Transactions on Automatic Control 61(11), 3356–3369.
  • Mauroy et al. (2013) Mauroy, A., I. Mezić, and J. Moehlis (2013). Isostables, isochrons, and koopman spectrum for the action–angle representation of stable fixed point dynamics. Physica D: Nonlinear Phenomena 261, 19–30.
  • Mezić (2005) Mezić, I. (2005). Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dynamics 41, 309–325.
  • Mezić (2020) Mezić, I. (2020). On numerical approximations of the koopman operator. arXiv:2009.05883v1, 1–24.
  • Moon et al. (2021) Moon, S. A., F. D. Sahneh, and C. Scoglio (2021). Group-based general epidemic modeling for spreading processes on networks: Groupgem. IEEE Transactions on Network Science and Engineering 8(1), 434–446.
  • Narasingam and Kwon (2020) Narasingam, A. and J. S.-I. Kwon (2020). Application of koopman operator for model-based control of fracture propagation and proppant transport in hydraulic fracturing operation. Journal of Process Control 91, 25–36.
  • Nowzari et al. (2016) Nowzari, C., V. M. Preciado, and G. J. Pappas (2016). Analysis and control of epidemics: A survey of spreading processes on complex networks. IEEE Control Systems Magazine 36(1), 26–46.
  • Nowzari et al. (2017) Nowzari, C., V. M. Preciado, and G. J. Pappas (2017). Optimal resource allocation for control of networked epidemic models. IEEE Transactions on Control of Network Systems 4(2), 159–169.
  • Otto and Rowley (2019) Otto, S. E. and C. W. Rowley (2019). Linearly recurrent autoencoder networks for learning dynamics. SIAM Journal on Applied Dynamical Systems 18(1), 558–593.
  • Otto and Rowley (2021) Otto, S. E. and C. W. Rowley (2021). Koopman operators for estimation and control of dynamical systems. Annual Review of Control, Robotics, and Autonomous Systems 4(1).
  • Pan and Duraisamy (2020) Pan, S. and K. Duraisamy (2020). Physics-informed probabilistic learning of linear embeddings of nonlinear dynamics with guaranteed stability. SIAM Journal on Applied Dynamical Systems 19(1), 480–509.
  • Peitz and Klus (2019) Peitz, S. and S. Klus (2019). Koopman operator-based model reduction for switched-system control of pdes. Automatica 106, 184 – 191.
  • Prasse and Mieghem (2020) Prasse, B. and P. V. Mieghem (2020). Predicting dynamics on networks hardly depends on the topology. arXiv:2005.14575v1, 1–24.
  • Preciado and Jadbabaie (2009) Preciado, V. M. and A. Jadbabaie (2009). Spectral analysis of virus spreading in random geometric networks. In Proceedings of the 48h IEEE Conference on Decision and Control (CDC) held jointly with 2009 28th Chinese Control Conference, pp. 4802–4807.
  • Preciado et al. (2014) Preciado, V. M., M. Zargham, C. Enyioha, A. Jadbabaie, and G. J. Pappas (2014). Optimal resource allocation for network protection against spreading processes. IEEE Transactions on Control of Network Systems 1(1), 99–108.
  • Proctor et al. (2016) Proctor, J. L., S. L. Brunton, and J. N. Kutz (2016). Including inputs and control within equation-free architectures for complex systems. The European Physical Journal Special Topics 225, 2413–2434.
  • Proctor et al. (2018) Proctor, J. L., S. L. Brunton, and J. N. Kutz (2018). Generalizing koopman theory to allow for inputs and control. SIAM Journal on Applied Dynamical Systems 17(1), 909–930.
  • Rowley and Dawson (2017a) Rowley, C. W. and S. T. Dawson (2017a). Model reduction for flow analysis and control. Annual Review of Fluid Mechanics 49(1), 387–417.
  • Rowley and Dawson (2017b) Rowley, C. W. and S. T. Dawson (2017b). Model reduction for flow analysis and control. Annual Review of Fluid Mechanics 49, 387–417.
  • Rowley et al. (2009) Rowley, C. W., I. Mezić, S. Bagheri, P. Schlatter, and D. S. Henningson (2009). Spectral analysis of nonlinear flows. Journal of Fluid Mechanics 641, 115–127.
  • Sahneh et al. (2013) Sahneh, F. D., C. Scoglio, and P. Van Mieghem (2013). Generalized epidemic mean-field model for spreading processes over multilayer complex networks. IEEE/ACM Transactions on Networking (TON) 21(5), 1609–1620.
  • Sahneh et al. (2017) Sahneh, F. D., A. Vajdi, H. Shakeri, F. Fan, and C. Scoglio (2017). Gemfsim: A stochastic simulator for the generalized epidemic modeling framework. Journal of Computational Science 22, 36–44.
  • Schmid (2010) Schmid, P. J. (2010). Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28.
  • Shakeri et al. (2015) Shakeri, H., F. D. Sahneh, C. Scoglio, P. Poggi-Corradini, and V. M. Preciado (2015). Optimal information dissemination strategy to promote preventive behaviors in multilayer epidemic networks. Mathematical Biosciences & Engineering 12(3), 609.
  • Sootla et al. (2018) Sootla, A., A. Mauroy, and D. Ernst (2018). Optimal control formulation of pulse-based control using koopman operator. Automatica 91, 217 – 224.
  • Surana (2018) Surana, A. (2018). Koopman operator framework for time series modeling and analysis. Journal of Nonlinear Science 30, 1973–2006.
  • Tu et al. (2014) Tu, J. H., C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz (2014). On dynamic mode decomposition: Theory and applications. Journal of Computational Dynamics 1(2), 391–421.
  • Van Mieghem et al. (2009) Van Mieghem, P., J. Omic, and R. Kooij (2009). Virus spread in networks. IEEE/ACM Transactions on Networking (TON) 17(1), 1–14.
  • Van Mieghem et al. (2011) Van Mieghem, P., D. Stevanović, F. Kuipers, C. Li, R. van de Bovenkamp, D. Liu, and H. Wang (2011, Jul). Decreasing the spectral radius of a graph by link removals. Phys. Rev. E 84, 016101.
  • Wang et al. (2016) Wang, W.-X., Y.-C. Lai, and C. Grebogi (2016). Data based identification and prediction of nonlinear and complex dynamical systems. Physics Reports 644, 1 – 76.
  • Watkins et al. (2020) Watkins, N. J., C. Nowzari, and G. J. Pappas (2020). Robust economic model predictive control of continuous-time epidemic processes. IEEE Transactions on Automatic Control 65(3), 1116–1131.
  • Watkins et al. (2018) Watkins, N. J., C. Nowzari, V. M. Preciado, and G. J. Pappas (2018). Optimal resource allocation for competitive spreading processes on bilayer networks. IEEE Transactions on Control of Network Systems 5(1), 298–307.
  • Wehmeyer and Noé (2018) Wehmeyer, C. and F. Noé (2018). Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics. The Journal of Chemical Physics 148(24), 241703.
  • Williams et al. (2016) Williams, M. O., M. S. Hemati, S. T. Dawson, I. G. Kevrekidis, and C. W. Rowley (2016). Extending data-driven koopman analysis to actuated systems. IFAC-PapersOnLine 49(18), 704 – 709. 10th IFAC Symposium on Nonlinear Control Systems NOLCOS 2016.
  • Williams et al. (2015) Williams, M. O., I. G. Kevrekidis, and C. W. Rowley (2015). A data–driven approximation of the koopman operator: Extending dynamic mode decomposition. Journal of Nonlinear Science 25, 1307–1346.
  • Williams et al. (2015) Williams, M. O., C. W. Rowley, and I. G. Kevrekidis (2015). A kernel-based method for data-driven koopman spectral analysis. Journal of Computational Dynamics 2(2), 247.
  • Wu and Noé (2020) Wu, H. and F. Noé (2020). Variational approach for learning markov processes from time series data. Journal of Nonlinear Science 30, 23–66.
  • Yeung et al. (2019) Yeung, E., S. Kundu, and N. Hodas (2019). Learning deep neural network representations for koopman operators of nonlinear dynamical systems. In 2019 American Control Conference (ACC), pp. 4832–4839.