1 Introduction
If the true dynamics of a system are uncertain, Adaptive Control strategies have been applied to meet control objectives and ensure system stability [krstic1995nonlinear, ioannou1996robust, sastry2011adaptive], typically in the absence of input and output constraints. The uncertainty in these systems can be attributed primarily to two factors: model uncertainty (e.g., modeling mismatch and inaccuracies) and exogenous disturbances (e.g., sensor noise). More recently, utilizing tools from classical Adaptive Control, Adaptive Model Predictive Control (MPC) [tanaskovic2014adaptive, lorenzenAutomaticaAMPC, bujarbaruahAdapFIR, kohlerNonlAMPC] has established itself as a promising approach for the control of uncertain systems subject to input and output constraints. For linear systems specifically, the Adaptive MPC literature has focused extensively on either robust or probabilistic satisfaction of such imposed constraints, using either state-space or input-output modeling.
In [soloperto2018learning, bujarArxivAdap] additive model uncertainties are considered with known system matrices, and imposed state and input constraints are robustly satisfied for all such realizable uncertainties. In [lorenzenAutomaticaAMPC, hernandez2018stabilizing, kohler2019linear] robust state-input constraint satisfaction is extended to include both unknown system dynamics matrices and additive disturbances. The approach introduced in [bujarArxivAdap] is also suited to the satisfaction of probabilistic chance constraints on system states. Furthermore, the work in [hewing2017cautious, koller2018learning, ostafew2014learning] uses Gaussian Process (GP) regression for real-time learning of an uncertain model and satisfies probabilistic state constraints with a traditional stochastic MPC [farina2016stochastic] controller. Although such state-space modeling based Adaptive MPC controllers have proven effective, they involve the construction of positive invariant sets [kolmanovsky1998theory, blanchini1999set], which can become computationally cumbersome. As a consequence, input-output modeling has been adopted in the literature to propose computationally efficient Adaptive MPC algorithms for certain applications (e.g., stable, slow systems).
Adaptive MPC algorithms using input-output modeling of the system are presented in [tanaskovic2014adaptive, 2017arXiv171207548T, bujarbaruahAdapFIR, FIRRoySmith19], both for robust and for probabilistic satisfaction of imposed input-output constraints. The works [tanaskovic2014adaptive, 2017arXiv171207548T, FIRRoySmith19] deal with modeling errors in the Finite Impulse Response (FIR) domain, in the presence of a bounded additive disturbance, and prove recursive feasibility and stability [borrelli2017predictive, Chapter 12] of the proposed robust Adaptive MPC approaches. These ideas are extended in [bujarbaruahAdapFIR], where a recursively feasible adaptive stochastic MPC algorithm is presented that satisfies probabilistic output constraints and hard input constraints. The approach in [bujarbaruahAdapFIR] achieves better performance than [tanaskovic2014adaptive], measured in terms of closed-loop cost, owing to the allowance of output constraint violations with a certain (low) probability.
In this paper, we build on the work of [tanaskovic2014adaptive, bujarbaruahAdapFIR] and propose an Adaptive Stochastic MPC algorithm that considers probabilistic output constraints and hard input constraints for a Multi Input Multi Output (MIMO) system. Similar to [bujarbaruahAdapFIR], we consider an uncertain FIR model of the system that is subject to bounded disturbances with known mean and variance. The support of the set of all possible models, which we call the Feasible Parameter Set (FPS), is adapted at each timestep using a set-membership approach. In contrast to the previous work [bujarbaruahAdapFIR], we additionally consider that the impulse response parameters corresponding to each output are sparse. Such sparse impulse response modeling of MIMO systems is motivated in [benesty2006adaptive, etter1985identification]. Our goal is to utilize this additional sparsity information to demonstrate a performance improvement over the algorithm in [bujarbaruahAdapFIR]. Our contributions can be summarized as follows:
Offline, before the control process, we compute a set containing all possible values of the unknown sparse FIR vectors corresponding to each output, with a very high probability. This set, which we call the Feasible Sparse Parameter Set (FSPS), is computed using the Basis Pursuit Denoising [chen2001atomic] problem.
Online, during the control process, we obtain a point estimate of the unknown system inside the intersection of the FPS and the FSPS, using a Recursive Least Squares (RLS) estimator. Using this estimated system, we propagate the nominal predicted outputs used in the MPC objective function to improve performance. Simultaneously, we ensure satisfaction of the output chance constraints for the unknown true system.

Through numerical simulations, we demonstrate that our algorithm exhibits better performance than the algorithm presented in [bujarbaruahAdapFIR].
2 Problem Description
2.1 System Modeling and Control Objective
We consider stable linear time-invariant systems described by a Finite Impulse Response (FIR) model of the form
(1) 
where the number of inputs and outputs considered is and , respectively. The FIR regressor vector of length is denoted by , where denotes the input at time . The matrix is a matrix comprising the impulse response coefficients that relate the inputs to the outputs of the system. The disturbance vector is assumed to be a zero-mean random variable with known variance, component-wise bounded as
(2) 
where the are assumed known. Finally, is the measured output of the system.
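As a concrete illustration, an FIR model of the form (1) can be simulated for a single-input single-output system as below. This is a minimal sketch with illustrative names and values (not taken from the paper); the regressor simply stacks the most recent inputs.

```python
import numpy as np

def simulate_fir_step(theta, u_hist, w):
    """One step of an FIR model, y_t = theta . phi_t + w_t, for a
    single-input single-output system; `u_hist` holds the most recent
    inputs, newest first (illustrative names, not from the paper)."""
    phi = np.asarray(u_hist)  # FIR regressor vector
    return float(theta @ phi + w)

# A sparse 3-tap impulse response, in the spirit of Assumption 1.
theta = np.array([0.5, 0.0, 0.25])
y = simulate_fir_step(theta, [1.0, 0.0, 2.0], w=0.1)
```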
Our goal is to control the output while satisfying input and output constraints of the form
(3a)  
(3b) 
where is the maximum allowed probability of output constraint violation. Following [bujarbaruahAdapFIR], we consider a single linear output chance constraint, meaning is a row vector and . Notice that joint (linear) chance constraints (3b) can be reformulated into a set of individual (linear) chance constraints using Bonferroni’s inequality, and can therefore also be addressed by our proposed framework.
Assumption 1.
We assume that each row of the impulse response matrix is sparse. Without loss of generality, we further assume that the sparsity index of each row, i.e., the number of nonzero entries, is at most .
2.2 Method Outline
We assume in this paper that the system matrix in (1) is unknown. We propose a method for identifying this unknown system matrix and using the estimate in a robust control formulation to safely regulate the constrained uncertain system. Our proposed method uses the following steps:

Offline, before the control process begins: use a number of collected input sequence regressors to compute a set , called the Feasible Sparse Parameter Set (FSPS). We compute this set via the Basis Pursuit Denoising problem [chen2001atomic]. The FSPS contains the true unknown model with high probability.

Online during the control process: At each timestep ,

Obtain the current output measurement and, using the known disturbance bounds (2), update the time-varying set , which we call the Feasible Parameter Set (FPS). The FPS is guaranteed to contain the true model .

Use the previously applied control inputs and measured outputs to construct an estimate of , lying in the intersection of and the offline-computed FSPS. The estimate is constructed using the Recursive Least Squares method.

Compute the input sequence that minimizes the objective function obtained with , while satisfying the input and output constraints (3) for all models in the FPS . Apply the first computed control input and return to step 2a.

3 Model Estimation and Adaptation
We approximate system (1) with the form
(4) 
where our model is a random variable whose support we estimate online during the control process from the output measurements. The support of the set of all possible models consistent with the recorded system data, which we call the Feasible Parameter Set (FPS), is guaranteed to contain the true model . Based on the knowledge that system (1) has sparse impulse response properties, we also construct a Feasible Sparse Parameter Set (FSPS) offline by solving the Basis Pursuit Denoising problem. During control runtime, a point estimate of is computed to lie in the intersection of the offline-computed FSPS and the online-updated FPS. This estimate is then used in the control design. We decouple the offline and online phases of this design process and detail the steps in the following sections.
3.1 Offline: Construct the Feasible Sparse Parameter Set
The Feasible Sparse Parameter Set (FSPS), denoted by , is a function of the (unknown) true system response , and is synthesized offline by exploiting the sparsity of the system responses from Assumption 1. Proposition 1 clarifies how this set is synthesized. We first introduce the following definition.
Definition 1 (Restricted Isometry Property (RIP) [candesTao]).
A matrix satisfies the Restricted Isometry Property (RIP) of order , with constant , if
The order restricted isometry constant is the smallest number such that the above inequality holds.
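For intuition, the restricted isometry constant of a small matrix can be brute-forced directly from the definition by checking every support of size s. This is only feasible for tiny examples (computing RIP constants is NP-hard in general); the matrices below are illustrative assumptions, not from the paper.

```python
import itertools
import numpy as np

def rip_constant(A, s):
    """Brute-force the order-s restricted isometry constant of A: the
    smallest delta with (1 - delta)||x||^2 <= ||Ax||^2 <= (1 + delta)||x||^2
    for all s-sparse x. Only feasible for tiny matrices."""
    delta = 0.0
    for S in itertools.combinations(range(A.shape[1]), s):
        # Eigenvalues of the Gram matrix restricted to support S.
        eig = np.linalg.eigvalsh(A[:, list(S)].T @ A[:, list(S)])
        delta = max(delta, abs(eig[-1] - 1.0), abs(1.0 - eig[0]))
    return delta

# Orthonormal columns satisfy RIP with constant 0 at every order.
print(rip_constant(np.eye(4), 2))  # -> 0.0
```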
Proposition 1.
Suppose we collect output measurements offline. Suppose for , where each and , with , and denotes the row of . If , then any solution to the Basis Pursuit Denoising optimization problem
(5) 
satisfies for some constant , and for all .
Proof.
See [YiMaBook, Theorem 3.5.1]. The proof follows from the known result of [candes2006stable, Theorem 1.1], with relaxation of to . ∎
The set is obtained as . Note that this set is synthesized offline, as the regressor vectors are required to come from a Gaussian distribution in order to ensure the RIP of each matrix for . Such Gaussian inputs are not always allowable during control with the system constraints (3). Details on our choice of these offline regressors to ensure with high probability are given in the Appendix.
Remark 1.
As pointed out in the proof of [YiMaBook, Theorem 3.5.1], a possible choice of the numerical constant in Proposition 1 is , with . However, since is not known exactly, and hence cannot be computed accurately. We see that as . Therefore, the offline regressor vectors should be chosen to ensure , with . Under such a choice of the offline regressors, as shown in the Appendix, we pick the constant .
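For intuition on the recovery step in Proposition 1, the following is a minimal sketch of sparse recovery in the Lagrangian (LASSO) form of Basis Pursuit Denoising, solved by iterative soft-thresholding (ISTA). The constrained form (5) would normally be handled by a dedicated solver; the problem sizes, seed, and regularization weight here are illustrative assumptions.

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=500):
    """Sparse recovery in the Lagrangian (LASSO) form of Basis Pursuit
    Denoising, min 0.5||Ax - y||^2 + lam ||x||_1, via iterative
    soft-thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10)) / np.sqrt(30)   # Gaussian regressors (RIP w.h.p.)
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -0.5]                      # 2-sparse ground truth
x_hat = ista_lasso(A, y=A @ x_true, lam=0.01)     # noiseless sketch
```

With noiseless data and a small regularization weight, the recovered vector matches the true sparse support.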
3.2 Online: Update the Feasible Parameter Set
Following [tanaskovic2014adaptive, bujarbaruahAdapFIR], a set-membership identification method is used to update the time-varying FPS, denoted by . The set is initialized using the fact that the true system (1) is stable. We update the FPS as
(6) 
where is the bound on the additive disturbance given by (2). To keep the computational complexity of (6) bounded over time, an alternative algorithm for computing (6) is presented in [tanaskovic2014adaptive].
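The set-membership update (6) amounts to appending, at each timestep, the pair of halfspaces implied by the disturbance bound (2). A minimal single-output sketch follows; the polytope representation and names are illustrative, not the paper's implementation.

```python
import numpy as np

def fps_update(H, h, phi, y, eps):
    """Append to the polytopic FPS {theta : H theta <= h} the two
    halfspaces implied by one measurement y = theta . phi + w with
    |w| <= eps, i.e. y - eps <= theta . phi <= y + eps (single output)."""
    H_new = np.vstack([H, phi, -phi])
    h_new = np.concatenate([h, [y + eps, -(y - eps)]])
    return H_new, h_new

# Initialize with a box |theta_i| <= 10 for a 2-parameter model.
H0 = np.vstack([np.eye(2), -np.eye(2)])
h0 = 10.0 * np.ones(4)
H1, h1 = fps_update(H0, h0, np.array([1.0, 0.0]), y=2.0, eps=0.5)
```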
3.3 Online: Obtain Point Estimate
We rewrite (4) as
where and are given in the Appendix. Furthermore, let be the variance of the disturbance . Let the initial prior mean and variance estimates of the true system be and , respectively. The conditional mean and variance estimates, given measurements up to , can then be obtained using the Recursive Least Squares method [anderson1979, Sec. 3.1]. These estimates may differ from the true conditional distribution parameters, as is not necessarily assumed to be Gaussian.
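A minimal sketch of the textbook Recursive Least Squares recursion referenced above; the names, forgetting factor, and toy data are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One textbook Recursive Least Squares update with forgetting
    factor `lam`; returns the new estimate and its P matrix."""
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + k * (y - phi @ theta)    # innovation correction
    P = (P - np.outer(k, phi) @ P) / lam
    return theta, P

# Identify a 2-tap impulse response from noiseless data.
rng = np.random.default_rng(1)
theta_true = np.array([0.8, -0.3])
theta, P = np.zeros(2), 100.0 * np.eye(2)
for _ in range(50):
    phi = rng.standard_normal(2)
    theta, P = rls_step(theta, P, phi, phi @ theta_true)
```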
We ensure that the mean point estimate at any time instant is chosen as a point contained in a set , that is, , with
(7) 
One way of obtaining such a constrained estimate of in is to project the mean. As shown in [bujarbaruahAdapFIR], this is achieved by solving
(8) 
where . The mean in matrix form, that is, , is obtained by reorganizing into columns. This yields the minimum mean squared error estimate (among linear estimators) of the true system . Note that (8) is a convex optimization problem, as the set in (7) is an intersection of two convex sets. We can further obtain a polytopic by outer-approximating with the infinity norm.
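With the infinity-norm outer approximation mentioned above, the constraint set becomes a box, and the Euclidean projection of the mean reduces to coordinate-wise clipping. A minimal sketch under that simplifying assumption (the general projection (8) onto the polytopic intersection would require a QP solver):

```python
import numpy as np

def project_onto_box(theta_mean, lo, hi):
    """Euclidean projection onto a box: for the infinity-norm outer
    approximation of the constraint set, projecting the RLS mean
    reduces to coordinate-wise clipping."""
    return np.clip(theta_mean, lo, hi)

lo, hi = np.array([-1.0, 0.0]), np.array([1.0, 0.5])
theta_proj = project_onto_box(np.array([1.7, -0.2]), lo, hi)
print(theta_proj)  # -> [1. 0.]
```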
Remark 2.
In [bujarbaruahAdapFIR] the set in (7) is taken to be the FPS for all timesteps . Thus, [bujarbaruahAdapFIR] does not utilize the sparsity of the rows of when constructing the nominal point model estimate . The construction of in (7) with our proposed approach utilizes this sparsity information from Assumption 1. However, this comes at the cost of the additional offline computation of , as shown in Proposition 1.
4 Control Synthesis
4.1 Prediction Model
Let be the prediction horizon for a predictive controller for system (1). We denote the predicted system outputs at time by , for some . Similarly, denotes the predicted regressor vector, for , and is computed as
(9) 
where the matrices and are reported in the Appendix. With these predicted regressor vectors, the estimated system obtained in Section 3.3 is used to propagate the nominal predicted outputs as , for all . These nominal outputs appear in the optimization problem presented in Section 4.3.
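For a single-input system, the regressor propagation in (9) is just a shift-register update: drop the oldest input and prepend the newly applied one. A minimal sketch with illustrative values:

```python
import numpy as np

def propagate_regressor(phi, u_new):
    """Shift-register update of a single-input FIR regressor: prepend
    the newly applied input and drop the oldest one."""
    return np.concatenate([[u_new], phi[:-1]])

phi = np.array([1.0, 2.0, 3.0])       # [u_{t-1}, u_{t-2}, u_{t-3}]
print(propagate_regressor(phi, 4.0))  # -> [4. 1. 2.]
```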
4.2 Reformulation of Chance Constraints
Within each prediction horizon we enforce , where is a function of some . Therefore, to ensure satisfaction of (3b) despite uncertainty in the true system, we must satisfy the constraint for all . Using the theory of distributionally robust optimization [calafiore2006distributionally, zymler2013distributionally], we can conservatively approximate the output chance constraints (3b) as
(10) 
where we have , and . Here, is an appended covariance matrix given in the Appendix. Since is a polytope, (10) can be written compactly as
(11) 
where denote the vertices of the polytope .
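The distributionally robust tightening in (10) can be checked numerically with a Cantelli-type bound: the chance constraint holds for every distribution with the given first two moments if the nominal constraint value plus a variance-dependent margin stays below the bound. A minimal single-constraint sketch with assumed numbers (not from the paper):

```python
import numpy as np

def dr_constraint_holds(F, f, mean_y, cov_y, eps):
    """Cantelli-type distributionally robust check: P(F y <= f) >= 1 - eps
    holds for every distribution with the given mean and covariance iff
    F mu + sqrt((1 - eps) / eps) * sqrt(F Sigma F') <= f."""
    margin = np.sqrt((1.0 - eps) / eps) * np.sqrt(F @ cov_y @ F)
    return bool(F @ mean_y + margin <= f)

F = np.array([1.0, 0.0])
mean_y, cov_y = np.array([0.2, 0.0]), 0.01 * np.eye(2)
print(dr_constraint_holds(F, 1.0, mean_y, cov_y, eps=0.1))  # -> True
```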
4.3 MPC Problem
We solve the following optimization problem for given weight matrices :
(12) 
where , and the regressor is as in (9). We have included the terminal constraint on the regressor vector as given in [tanaskovic2014adaptive]
(13) 
This means the terminal regressor corresponds to a steady state (i.e. the last control inputs in a horizon are kept constant). Problem (12) is a convex optimization problem and can be solved with existing solvers. After solving (12), we apply the first input
(14) 
to system (1) in closed loop. We then re-solve (12) at timestep with newly estimated data and . This yields a receding-horizon control scheme. The resulting algorithm is summarized in Algorithm 1.
Theorem 1.
Proof.
Let (12) be feasible at timestep and let the corresponding optimal input sequence be . After applying control (14) in closed loop to (1), consider the candidate open-loop control sequence at the next timestep
(15) 
This candidate sequence (15) satisfies the input constraints (3a) and the terminal constraint (13) at . So, using candidate sequence (15) and condition (13), we obtain
(16a)  
(16b) 
To show feasibility of (12) at , it remains to ensure that (11) is satisfied with (15) at . That is, we require for all
(17) 
for all vertices. Now, for the chosen input sequence (15), by using (13) and (16), condition (17) can be expressed as
(18) 
for all . We can now guarantee that (18) is satisfied at timestep with (15). This follows from the observation that the feasible parameter set can only shrink as new cuts (6) are introduced at each timestep, so the vertices at timestep are convex combinations of those at . Thus, the control sequence (15) is feasible for (12) at timestep . This completes the proof. ∎
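The candidate sequence (15) used in the proof is the standard shifted sequence: drop the first optimal input and repeat the last one, which preserves the terminal steady-state condition (13). A minimal sketch with illustrative values:

```python
import numpy as np

def shifted_candidate(u_opt):
    """Standard recursive-feasibility candidate: drop the first optimal
    input and repeat the last one, preserving the terminal
    steady-state condition."""
    return np.concatenate([u_opt[1:], u_opt[-1:]])

u_opt = np.array([0.3, 0.1, -0.2])
u_cand = shifted_candidate(u_opt)
```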
5 Numerical Simulations
We present simulation results for a simple single-input, single-output system. (We choose this model for simplicity of simulation; sparsity in FIR models is typically well motivated for MIMO systems.) We compare the performance of our Algorithm 1 with that of the adaptive stochastic MPC presented in [bujarbaruahAdapFIR]. Performance is measured in terms of the expected closed-loop cost , where the closed-loop cost of any trajectory, which is a function of the disturbance realization , is given by
where for Algorithm 1 is obtained as in (7), and for the algorithm in [bujarbaruahAdapFIR], , as Remark 2 points out. To simulate both algorithms, we use the parameters given in Table 1, with the true system response given as , which is sparse.
Parameter  Value  Parameter  Value 

We run Monte Carlo simulations of both algorithms with randomly chosen disturbance sequences . We approximate the average closed-loop cost by the empirical average
(19) 
where represents the Monte Carlo sample.
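The empirical average (19) can be sketched as follows, with an illustrative quadratic per-trajectory cost; the weights and toy trajectories are assumptions, not the values of Table 1.

```python
import numpy as np

def closed_loop_cost(ys, us, Q, R):
    """Quadratic closed-loop cost of one trajectory:
    sum_t y_t' Q y_t + u_t' R u_t."""
    return sum(float(y @ Q @ y + u @ R @ u) for y, u in zip(ys, us))

# Two toy one-step "trajectories"; the Monte Carlo estimate is their mean.
Q, R = np.eye(1), 0.1 * np.eye(1)
costs = [closed_loop_cost([np.array([1.0])], [np.array([2.0])], Q, R),
         closed_loop_cost([np.array([0.0])], [np.array([1.0])], Q, R)]
avg_cost = float(np.mean(costs))
```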
5.1 Cost Comparison
Fig. 1 shows the comparison of the closed-loop cost, expressed as , for different Monte Carlo draws of trajectories, obtained with Algorithm 1 and with the algorithm in [bujarbaruahAdapFIR]. We see that the empirical average closed-loop cost (19) for Algorithm 1 is around lower than the corresponding value obtained with [bujarbaruahAdapFIR].
This demonstrates the performance gain of Algorithm 1 as a consequence of leveraging the sparsity information of via the FSPS . Furthermore, the closed-loop costs obtained for out of the trajectories were lower with Algorithm 1 than the corresponding costs obtained with [bujarbaruahAdapFIR]. This improvement in cost is explained in the next section via a closer look at the corresponding system trajectories.
5.2 Closedloop Trajectory Comparison
To further analyze the performance gain seen above, we consider the difference in absolute value of the outputs along each of the 100 trajectories (Fig. 2). Specifically, we analyze
for , where and denote the outputs obtained using Algorithm 1 and the algorithm in [bujarbaruahAdapFIR], respectively. We see from Fig. 2 that out of all simulated timesteps, only at about of them is . This implies that at about of all simulated timesteps, Algorithm 1 resulted in outputs closer to the origin, as desired.
Fig. 3 further plots the difference in absolute value of the corresponding closed-loop inputs, i.e., for along all 100 trajectories, where and denote the optimal closed-loop inputs obtained using Algorithm 1 and the algorithm in [bujarbaruahAdapFIR], respectively. At only of the 2000 timesteps is , which indicates that the improved output regulation in Fig. 2 does not come at the cost of larger input magnitudes. Thus, Fig. 2 and Fig. 3 combined provide a comprehensive explanation of the cost improvement in Fig. 1.
6 Conclusions
We proposed an Adaptive Stochastic Model Predictive Control (MPC) algorithm for stable linear time-invariant MIMO systems described by an FIR model whose parameters, corresponding to each output, are sparse but unknown. Offline, before the control process, we solve a Basis Pursuit Denoising problem, which utilizes the sparsity information of the system model to give set-based bounds containing the true system parameters. Online, during the control process, we estimate these parameters with the Recursive Least Squares algorithm and refine the estimates using the set-based bounds obtained offline. Using Set Membership methods, our MPC controller ensures satisfaction of all imposed constraints by the unknown true system. Through a thorough numerical example, we demonstrated a performance gain over the algorithm of [bujarbaruahAdapFIR].
Appendix
Choosing Offline Regressors
Following [YiMaBook, Theorem 3.3.4], there exists a numerical constant such that if is a random matrix with entries that are independent random variables, then with high probability , provided
(20)
Offline regressor vectors are thus chosen to satisfy (20), with each entry of as . The constant can be found exactly using properties of phase transitions in compressed sensing [donoho2006compressed].
Matrix Notations
We define
where denotes the row of . Moreover,
Based on this matrix we get:
which gives .
Chance Constraint to Conic Optimization
Here we repeat the derivation of [bujarbaruahAdapFIR]. To ensure satisfaction of (3b) with the unknown true system , we must satisfy them for all . Thus, is treated as a deterministic variable (with mean and variance ) while reformulating (3b) into equivalent convex constraints. Therefore, for all , we have:
We denote,
where denotes the mean of a quantity . Moreover,
From these we can derive the variance of as
As in [bujarbaruahAdapFIR], we assume no correlation between the disturbance and the impulse response distribution. Now, (3b) is reformulated into second-order cone constraints following [calafiore2006distributionally]
(21) 
for all , where for any bounded disturbance distribution with known moments. After simplifications, (21) can be written for all as