Automated Synthesis of Safe Digital Controllers for Sampled-Data Stochastic Nonlinear Systems

by   Fedor Shmarov, et al.

We present a new method for the automated synthesis of digital controllers with formal safety guarantees for systems with nonlinear dynamics, noisy output measurements, and stochastic disturbances. Our method derives digital controllers such that the corresponding closed-loop system, modeled as a sampled-data stochastic control system, satisfies a safety specification with probability above a given threshold. The proposed synthesis method alternates between two steps: generation of a candidate controller pc, and verification of the candidate. pc is found by maximizing a Monte Carlo estimate of the safety probability, and by using a non-validated ODE solver for simulating the system. Such a candidate is therefore sub-optimal but can be generated very rapidly. To rule out unstable candidate controllers, we prove and utilize Lyapunov's indirect method for instability of sampled-data nonlinear systems. In the subsequent verification step, we use a validated solver based on SMT (Satisfiability Modulo Theories) to compute a numerically and statistically valid confidence interval for the safety probability of pc. If the probability so obtained is not above the threshold, we expand the search space for candidates by increasing the controller degree. We evaluate our technique on three case studies: an artificial pancreas model, a powertrain control model, and a quadruple-tank process.




1. Introduction

Digital control (Ogata, 1995) is essential in many cyber-physical and embedded systems applications, ranging from aircraft autopilots to biomedical devices, due to its superior flexibility and scalability, and lower cost compared to its analog counterpart. The synthesis of analog controllers for linear systems is well-studied (Nise, 2016), but its extension to nonlinear and stochastic systems has proven much more challenging. Furthermore, digital control adds extra layers of complexity, e.g., time discretization and signal quantization. A common problem in both digital and analog control is the lack of automated synthesis techniques with provable guarantees, especially for properties beyond stability (e.g., safety) for nonlinear stochastic systems.

In this paper we address this problem by introducing a new method for the synthesis of probabilistically safe digital controllers for a large class of stochastic nonlinear systems, viz. sampled-data stochastic control systems. In such systems, the plant is a set of nonlinear differential equations subject to random disturbances, and the digital controller samples the noisy plant output, generating the control input with a fixed frequency.

Controllers are usually designed to achieve stability of the closed-loop system. Lyapunov’s indirect method provides conditions under which the stability of an equilibrium point of a nonlinear system follows from the stability of that point for the linearized version of the system (Khalil, 2002). Lyapunov’s method for sampled-data nonlinear systems is much more involved. Previous work (Nesic and Teel, 2004) provides sufficient conditions on the sampled-data linearized system that ensure stability of the sampled-data nonlinear system. Unfortunately, it is difficult to verify these conditions algorithmically. In this paper, we instead prove necessary conditions for stability that are easy to verify, and use them to restrict the controller synthesis domain. However, a stable system is not necessarily safe, as during the transient the system might reach an unsafe, catastrophic state. The synthesis approach that we propose overcomes this issue by deriving controllers that are safe.

Given an invariant (i.e., a correctness specification) and a nonlinear plant with stochastic disturbances and noisy outputs, our method synthesizes a digital controller such that the corresponding closed-loop system satisfies the invariant with probability above a given threshold. The synthesis algorithm (Algorithm 1 in Section 5) is illustrated in Figure 1. It works by alternating between two steps: generation of a candidate controller and verification of the candidate. The candidate is generated via the optimize procedure (see Algorithm 2), which maximizes a Monte Carlo estimate of the satisfaction probability by simulating a discrete-time approximation of the system with a non-validated ODE solver. Such a candidate is, therefore, sub-optimal but very rapid to generate. To rule out unstable controller candidates, we prove and utilize Lyapunov's indirect method for instability of sampled-data nonlinear systems. Along with the candidate, optimize returns an approximate confidence interval (CI) for the satisfaction probability.

Next, in the verification step (procedure verify), we use a validated solver based on SMT (Satisfiability Modulo Theories) to compute a numerically and statistically valid CI for the satisfaction probability of the candidate. If the deviation between the approximate CI and the precise CI is too large, indicating that the candidates generated by optimize are not sufficiently accurate, we increase the precision of the non-validated, fast solver (procedure update_discretization). If instead the precise probability is not above the threshold, we expand the search space for candidates by increasing the controller degree.

Summarizing, the novel contributions of this paper are:

  • we synthesize digital controllers for nonlinear systems subject to stochastic disturbances and measurement noise, while state-of-the-art approaches consider linear systems only;

  • we prove Lyapunov’s indirect method for instability of nonlinear systems in closed-loop with digital controllers;

  • we present a novel algorithm that synthesizes digital controllers with guaranteed probabilistic safety properties.

Figure 1. Overview of the synthesis approach.

2. Sampled-data Stochastic Systems

We consider sampled-data stochastic control systems (SDSS), a rich class of control systems where the plant is specified as a nonlinear system subject to random disturbances. The controller periodically samples the plant output, which is subject to random noise, and generates, using the output history, a control input that is kept constant during the sampling period by a zero-order hold; see Figure 2. The controller is characterized by a number of unknown parameters, which are the target of our synthesis algorithm.

Definition 2.1 (Sampled-data Stochastic Control System).

An SDSS can be described in the following state-space notation:

  x'(t) = f(x(t), u(t), d(t)),   x(0) = x0
  y(t) = g(x(t)) + w(t)                                  (1)

where x(t) is the state of the plant; x0 is the initial state at time t = 0; d(t) is the disturbance; y(t) is the plant output, which is a function of the state with additive i.i.d. noise w(t) with covariance matrix Σ; u(t) is the control input, updated at every sampling period Ts by the digital controller (defined in Section 3); and p ∈ P is the vector of unknown controller parameters, where P is a hyperbox (i.e., a product of closed intervals). The dynamics of the plant is governed by the vector field f, which is assumed to be in C^1, hence Lipschitz-continuous. We also assume that the output map g is in C^1.

We assume that there are no time lags for transmitting the plant output to the controller and the control input to the plant. The disturbance is a piecewise-continuous function having, over the finite time horizon, a finite number of discontinuities. The discontinuity points and the value of the disturbance on each sub-domain can be defined in terms of a finite number of random parameters drawn from arbitrary distributions. Note that these assumptions on the disturbance are reasonably mild and allow us to define very general classes of systems, which subsume, for instance, numerical solutions of stochastic differential equations (Rümelin, 1982). (Such numerical solutions rely on computing the value of the Wiener process at discrete time points, which makes them a special case of our disturbances.)
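To make the closed-loop semantics concrete, the following Python sketch forward-simulates an SDSS trajectory under a zero-order-hold controller. It is a minimal illustration, not the paper's implementation: the plant `f`, output map `g`, controller, and Gaussian noise model are placeholder assumptions, and a fixed-step Euler integrator stands in for the (non-validated) ODE solver mentioned in the introduction.

```python
import numpy as np

def simulate_sdss(f, g, controller, x0, p, T, Ts, m, noise_std, rng):
    """Simulate an SDSS over [0, T] with sampling period Ts.

    f          : plant vector field f(x, u, d)
    g          : output map (state -> measured quantity)
    controller : maps (parameters p, noisy output y) -> control input
    m          : Euler integration steps per sampling period
    """
    x = np.asarray(x0, dtype=float)
    h = Ts / m                                  # integration step within a period
    traj = [x.copy()]
    for _ in range(int(round(T / Ts))):
        y = g(x) + noise_std * rng.standard_normal()   # noisy measurement
        u = controller(p, y)                           # held constant for Ts (ZOH)
        for _ in range(m):
            d = rng.standard_normal()                  # random disturbance sample
            x = x + h * f(x, u, d)                     # forward Euler step
        traj.append(x.copy())
    return np.array(traj)
```

For example, a scalar plant x' = -x + u with a proportional feedback u = -p*y yields a trajectory of one state per sampling instant, which is exactly the discrete-time approximation that procedure optimize evaluates.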

Figure 2. Diagram of a sampled-data stochastic control system.

3. Digital Controllers

The operation of a digital controller is succinctly indicated in Equation (1). These computations are generally performed using current and past output samples and past input samples.

Definition 3.1 (Digital Controller for SDSS).

Given an SDSS, denote by y(k) and u(k) the sampled plant output and the control input at the k-th sampling instant, and define the tracking error as

  e(k) = r(k) - y(k)                                     (2)

where r(k) is the reference signal. The output of the controller is

  u(k) = a_0 e(k) + a_1 e(k-1) + ... + a_q e(k-q) + b_1 u(k-1) + ... + b_q u(k-q)   (3)

where e(k-i) = 0 and u(k-j) = 0 for k-i < 0 and k-j < 0 (note that if the controller has been previously deployed, i.e., it starts from a non-empty history, then these values may be nonzero for negative indices), and q is the controller degree.

Controller design amounts to finding a degree q and coefficients a_i, b_j that ensure the desired behavior of the closed-loop system. The vector of parameters p defined in (1) is recovered by stacking the coefficients, p = (a_0, ..., a_q, b_1, ..., b_q). An alternative description of the controller is via the state-space representation

  z(k+1) = A_c z(k) + B_c e(k)
  u(k)   = C_c z(k) + D_c e(k)                            (4)

where z(k) is the state of the controller and the matrices A_c, B_c, C_c, D_c need to be designed. The above two representations are equivalent. Given a controller in the form of (3), one can transform it to the representation (4), for instance by taking states as memories that store previous values of inputs/outputs. Given the matrices A_c, B_c, C_c, D_c, one can easily compute the coefficients in (3) using matrix multiplications (Ogata, 1995).
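The difference-equation form (3) can be implemented directly by keeping short histories of tracking errors and past inputs. The Python sketch below is illustrative only: the class name, the setpoint handling, and the coefficient layout (error coefficients followed by past-input coefficients) are assumptions, not the paper's notation.

```python
from collections import deque

class DigitalController:
    """Degree-q digital controller in difference-equation form:
    u(k) = sum_i a_i * e(k-i) + sum_j b_j * u(k-j),
    with e(k) the tracking error under negative feedback."""

    def __init__(self, a, b, setpoint=0.0):
        self.a = list(a)                  # error coefficients a_0 .. a_q
        self.b = list(b)                  # past-input coefficients b_1 .. b_q
        self.setpoint = setpoint
        # Histories start from zero, matching the empty-history assumption.
        self.e_hist = deque([0.0] * len(self.a), maxlen=len(self.a))
        self.u_hist = deque([0.0] * len(self.b), maxlen=len(self.b) or None)

    def step(self, y):
        e = self.setpoint - y             # tracking error (negative feedback)
        self.e_hist.appendleft(e)
        u = sum(ai * ei for ai, ei in zip(self.a, self.e_hist))
        u += sum(bj * uj for bj, uj in zip(self.b, self.u_hist))
        self.u_hist.appendleft(u)
        return u
```

With b empty this reduces to a finite-impulse-response controller on the error history; the deques act as the "memories that store previous values of inputs/outputs" mentioned above.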

3.1. Stability of the closed-loop system

A necessary requirement of any controller is stability of the closed-loop system. In our setting, we have the influence of both external inputs and initial states . A suitable notion is input-to-state stability (ISS) (Sontag, 2008), which implies that bounded input signals must result in bounded outputs and, at the same time, that the effect of initial states must disappear as time goes to infinity. A necessary requirement of ISS is Lyapunov stability of the system ‘without’ external inputs, stated in the next definition, a requirement that can be applied to both continuous- and discrete-time systems.

Definition 3.2 (Lyapunov Stability).

Consider a dynamical system with open state space X and without any external inputs, where x* ∈ X is an equilibrium point. Then x* is called Lyapunov stable if for every ε > 0 there exists a δ > 0 such that for all initial states x0 with ||x0 - x*|| < δ, we have ||x(t) - x*|| < ε for all t ≥ 0.

It is very easy to verify stability for discrete-time linear systems.

Proposition 3.3 (Stability of Linear Systems (Khalil, 2002)).

A linear discrete-time system x(k+1) = A x(k) is Lyapunov stable at the origin, and indeed asymptotically stable, if and only if all eigenvalues of A are inside the unit circle. This condition is equivalent to the existence of positive definite matrices P and Q such that A^T P A - P = -Q.
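Both conditions of Proposition 3.3 are easy to check numerically. The sketch below (an illustration using NumPy, with names of our choosing) tests the eigenvalue condition and solves the discrete Lyapunov equation A^T P A - P = -Q via the convergent series P = Σ_k (A^T)^k Q A^k, which is valid exactly when A is Schur stable.

```python
import numpy as np

def is_schur_stable(A, tol=1e-9):
    """All eigenvalues strictly inside the unit circle (Proposition 3.3)."""
    A = np.asarray(A, dtype=float)
    return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1.0 - tol)

def discrete_lyapunov(A, Q, iters=200):
    """Solve A^T P A - P = -Q via the series P = sum_k (A^T)^k Q A^k,
    which converges when A is Schur stable."""
    A, Q = np.asarray(A, dtype=float), np.asarray(Q, dtype=float)
    P = np.zeros_like(Q)
    M = np.eye(A.shape[0])
    for _ in range(iters):
        P += M.T @ Q @ M     # accumulate (A^T)^k Q A^k
        M = A @ M            # advance A^k
    return P
```

One can verify the solution by substituting it back into the Lyapunov equation; the residual shrinks with the spectral radius of A raised to the number of series terms.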

In the remainder of this section, we consider a version of the SDSS in Definition 2.1 controlled by (4) without any external input, i.e., when the reference, disturbance, and noise signals are identically zero. We study Lyapunov stability of the closed-loop system without external inputs, which is necessary for input-to-state stability. We shift coordinates so that the equilibrium point of the SDSS (1) and the equilibrium point of the controller are both moved to the origin. We then denote the plant dynamics, after eliminating the external inputs and in the shifted variables, by


Thus the origin is an equilibrium point for (5). The digital controller dynamics is likewise given by


where the minus sign is due to the negative feedback in (2). The nonlinear system (5) is controlled with the digital controller (6) by feeding the sampled plant output into the controller at every sampling instant.

Ensuring stability of the sampled-data nonlinear control system (5)-(6) is difficult in general. Sufficient conditions for preserving stability under linearization are provided in (Nešić et al., 1999; Nesic and Teel, 2004), but they are hard to verify automatically. Rather, we provide an easy-to-check necessary condition for Lyapunov stability to reject unsuitable controllers. This necessary condition is based on Lyapunov’s indirect method developed here for sampled-data nonlinear systems. In the following we prove that if the linearized closed-loop system has an eigenvalue outside the unit circle, the nonlinear closed-loop system (5)-(6) is not Lyapunov stable, thus the system (1)-(4) is not input-to-state stable.

We first consider the linearized version of the closed-loop system (5)-(6), which is

where the matrices of the linearization are the Jacobians of the plant and of the controller dynamics, evaluated at the equilibrium. Define the remainder terms as the deviation between the nonlinear and the linearized vector fields. Then, for any ε > 0 there exists a δ > 0 such that the remainder terms are bounded in norm by ε times the norm of the state, for all states of norm at most δ; we refer to this bound as property (7). We now write the dynamics of the closed-loop nonlinear system as the linearized dynamics plus the remainder terms, referred to as (8).

The next lemma establishes a bound on the state of system (8) over a sampling interval. Due to space constraints we present the proof of this lemma in the appendix.

Lemma 3.4.

Under dynamics (8), for a given and any , we have


if for all , with satisfying property (7). Functions are continuous and nonnegative with and .

The upper bound (9) enables us to study the effect of the nonlinear remainder terms in the sampled version of the dynamics, which can be written as

with , , , and


Next, we derive a bound for in terms of and .

Lemma 3.5.

For any , there exist continuous functions such that the following inequality holds for defined in (10),

if for all , with satisfying property (7). Functions are nonnegative with and .

The explicit form of is provided, along with the proof, in the appendix. We are now ready to state our main result of this section.

Theorem 3.6.

The continuous-time nonlinear system (5) controlled with the digital controller (6) is unstable if the linearized continuous-time system controlled by the same digital controller has a pole outside the unit circle.

Sketch of the proof. We prove the theorem by contradiction. We show that there is an ε > 0 such that for any δ > 0 we can find an initial state for the system and the controller within distance δ of the equilibrium, and a time at which the trajectory leaves the ball of radius ε. We construct a Lyapunov function for the linear system that is strictly increasing on a suitable set of initial states. By a proper selection of ε, we show that this Lyapunov function is also strictly increasing along the nonlinear system as long as the trajectory remains inside the ball with radius ε, which is a contradiction.


The closed-loop linearized system will have the following dynamics in discrete time

with and . This gives the following state transition matrix


which is assumed to have at least one eigenvalue outside the unit circle. We cluster the eigenvalues of into a group of eigenvalues outside the unit circle and a group of eigenvalues on or inside the unit circle. Then there is a nonsingular matrix such that

where is stable. In other words, contains all of the eigenvalues of from the first group. (The matrix can be found for instance by transforming into its real Jordan form.) Let us define

where the partitions of and are compatible with the dimensions of and . The dynamics of becomes

Now define by

Then both and are stable matrices. According to Proposition 3.3, there are positive definite matrices and such that the following matrix equalities hold

and we get

with and . For the function

we have

where the positive constant

is obtained using definition of as a function of and Lemma 3.5 as

Similarly, we use the Cauchy-Schwarz inequality for


Then, function satisfies

with .

Take sufficiently small such that with its associated radius . Note that this is always possible since , thus , are bounded on the interval . Then we have as long as . For the proof of instability, take radius

and any initial condition such that

We claim that the trajectory starting from will always leave the ball with radius . Suppose this is not true, i.e., for all . Then for all and and . Then , which contradicts the boundedness of . ∎

Stability test

The characteristic polynomial of the linearized system (11) is

which is a polynomial whose coefficients depend on the choice of parameters for the digital controller (1), in either of the representations (3)-(4). As we have shown in Theorem 3.6, if this polynomial has a root outside the unit circle, then the closed-loop sampled-data nonlinear system is unstable, so we can eliminate that controller from the synthesis domain.
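The resulting test is cheap: compute the roots of the characteristic polynomial induced by a candidate's parameters and reject the candidate if any root lies outside the unit circle. A minimal sketch (the coefficient vector is assumed to have been computed already from the candidate controller; the function name is ours):

```python
import numpy as np

def passes_stability_test(char_poly_coeffs, tol=1e-9):
    """Necessary condition from Theorem 3.6: if the characteristic
    polynomial of the linearized closed-loop system has a root outside
    the unit circle, the candidate controller is rejected.
    Coefficients are ordered from highest degree to constant term."""
    roots = np.roots(char_poly_coeffs)
    return bool(np.all(np.abs(roots) <= 1.0 + tol))
```

Because this is only a necessary condition, a candidate that passes the test may still be unstable; the test serves to prune the synthesis domain cheaply before the expensive verification step.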

4. Digital Controller Synthesis

In this section we define the digital controller synthesis problem for SDSSs, using the notation of Definition 2.1. Recall that we consider controller parameters p ranging over a hyperbox, a finite time horizon, and a safety invariant defined as a predicate over the SDSS state vector (a quantifier-free FOL formula over the theory of nonlinear real arithmetic). The synthesized controller should ensure safety with respect to the probability measure induced by the stochastic disturbance and the measurement noise. In particular, the probability of satisfying the invariant at every time within the horizon should be above a given, user-defined threshold. We further constrain the controller search by limiting the maximum controller degree (see (3) in Definition 3.1).

Below, when we speak of the stochastic uncertainty, we mean the uncertainty due to the SDSS disturbances and the measurement noise up to the time horizon.

Definition 4.1 (Digital Controller Synthesis).

Given an SDSS, a time bound, a maximum controller degree, and a probability threshold, the digital controller synthesis problem is finding the smallest degree not exceeding the maximum, and controller parameters in the parameter space of controllers of that degree, such that the closed-loop system satisfies the invariant over the time horizon with probability above the threshold. (Satisfaction of the invariant depends, indirectly, on the controller parameters and on the stochastic uncertainty.) If no such parameters exist for any admissible degree, then we say the problem is infeasible.

In general the above synthesis problem is very hard to solve exactly due to the presence of nonlinearities, ordinary differential equations (ODEs) introduced by the SDSS dynamics, and multi-dimensional integration for computing probabilities. In fact, a decision version of the digital controller synthesis problem (i.e., deciding whether the set of feasible controller parameters is nonempty) is easily shown to be undecidable. While Satisfiability Modulo Theories (SMT) approaches, e.g., (Gao et al., 2012), can now in principle handle nonlinear arithmetic via a sound numerical relaxation, their computational complexity is exponential in the number of variables. In particular, our stochastic optimization problem is high-dimensional and currently infeasible for fully formal approaches, but it can be tackled using the mixed SMT-statistical approach of (Shmarov and Zuliani, 2016), which computes statistically and numerically sound confidence intervals. As such, we replace the (exact) probability with the empirical mean, thereby obtaining a Monte Carlo version of Definition 4.1.

Definition 4.2 (Digital Controller Synthesis - Monte Carlo).

Given an SDSS, a time bound, a maximum controller degree, a probability threshold, and a confidence value, the Monte Carlo digital controller synthesis problem is finding the smallest admissible degree and controller parameters such that the empirical mean, over a finite number of independent and identically distributed realizations of the stochastic uncertainty, of the indicator of invariant satisfaction exceeds the threshold, with the prescribed confidence under the corresponding product measure.

Note that, by the law of large numbers, when the number of samples grows and the confidence value is sufficiently close to one, the empirical mean approximates the exact satisfaction probability arbitrarily well. Also, note that the constraints in Definition 4.2 are simpler to solve than those in Definition 4.1, as they do not involve absolutely precise integration of the probability measure; we only require deciding the constraints with some statistical confidence. However, even with this simplification, a decision version of the Monte Carlo digital controller synthesis problem remains undecidable when plants with nonlinear ODEs are involved. Intuitively, that is because evaluating the indicator function amounts to solving reachability, which is well known to be an undecidable problem for general nonlinear systems. Hence, one can only solve the Monte Carlo controller synthesis problem approximately, and that is what we aim to do in the next section. Finally, note that we can generate sample realizations of the uncertainty: recall from Section 2 that the disturbance is defined from a finite number of random variables. This implies that the uncertainty is finite-dimensional, as the number of random noise variables is also finite due to the sampling period.
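The empirical mean of Definition 4.2 can be sketched in a few lines; `simulate` and `invariant` are illustrative placeholders for a closed-loop trajectory sampler and the safety predicate, not the paper's interfaces.

```python
import numpy as np

def empirical_safety(simulate, invariant, p, n, rng):
    """Monte Carlo estimate of the safety probability: draw n i.i.d.
    realizations of the uncertainty, simulate the closed-loop system
    under controller parameters p, and average the indicator of the
    invariant holding along the whole trajectory."""
    hits = 0
    for _ in range(n):
        traj = simulate(p, rng)                      # one sampled realization
        hits += int(all(invariant(x) for x in traj)) # indicator of safety
    return hits / n
```

The estimate converges to the true satisfaction probability as n grows, and standard concentration bounds turn it into the confidence intervals used by the synthesis algorithm.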

5. Synthesis Algorithm

In this section we present an algorithm for approximately solving the Monte Carlo controller synthesis problem of Definition 4.2. Our synthesis algorithm starts from low-degree controllers and iteratively increases the degree until the probability constraint is satisfied or the degree reaches a maximum value.

The synthesis algorithm, summarized in Algorithm 1, consists of two nested loops. The inner loop (lines 1-9) consists of two main stages: optimization and verification. Procedure optimize (line 1) aims at finding controller parameters that (approximately) maximize the empirical probability that the closed-loop system with a discrete-time version of the plant satisfies the invariant over the finite time horizon; optimize also returns an approximate confidence interval (CI) for this probability. Optimization is based on the cross-entropy algorithm, but our approach could work with different black-box optimization algorithms too, such as Gaussian process optimization (Rasmussen, 2004) and particle swarm optimization (Kennedy, 2011). Then, procedure verify (line 1) checks the candidate controller in closed-loop with the original (continuous-time) plant model and computes a precise CI for the satisfaction probability. The reason for using the continuous-time plant only in verify is the high computational complexity of validated numerical ODE solving compared to solving a discrete-time approximation. The interval returned by verify is compared against the current best verified interval, which is then updated accordingly (line 1).

The procedures optimize and verify are iterated until the approximate CI (for the discrete-time plant) and the verified CI (for the continuous-time plant) overlap to a certain length, or a maximum number of iterations is reached (line 9). In the outer loop, if the lower bound of the verified confidence interval is larger than the threshold, then the candidate is a witness for the synthesis problem, with probability at least the prescribed confidence. Therefore, it approximately solves the digital controller synthesis problem of Definition 4.2, and the algorithm terminates. (The approximation lies in the fact that we cannot guarantee that the synthesized controller has minimum degree.) Otherwise, we increase the controller degree, up to a maximum degree.

In the inner loop, line 1 improves the approximation of the closed-loop system used in optimize. This can be any adjustment to the ODE solver complexity (e.g., increasing the Taylor series order). In our case, it corresponds to increasing the number of time points used for ODE integration. We next explain both optimize and verify in more detail.

Input: SDSS; maximum controller degree; parameter domain; probability threshold; initial solver discretization; factor for tuning the interval overlap; confidence interval size; confidence value.
Output: best performing controller.
repeat
      repeat
            (candidate, approximate CI) := optimize(...)
            verified CI := verify(candidate)
            if the verified CI improves on the best verified CI so far then update the best controller and CI
            if the approximate and verified CIs deviate too much then update_discretization
      until the two CIs overlap sufficiently or a maximum number of iterations is reached
      increase the controller degree
until the lower bound of the best verified CI exceeds the threshold or the degree exceeds the maximum
return the best performing controller
Algorithm 1 Main Synthesis Algorithm

Procedure optimize

In Algorithm 2 we give the pseudocode for the optimize function (line 1 of Algorithm 1). It implements a modified cross-entropy (CE) optimization algorithm (Rubinstein, 1999) that repeatedly samples (from the CE distribution) controller parameters, evaluates their performance, and guides the search towards parameters that increase the safety probability (i.e., the probability of satisfying the invariant over the time horizon). Sample performance is computed first by the stability check using Theorem 3.6 (line 2 of Algorithm 2). If a controller does not pass the test, i.e., it is necessarily unstable, it is rejected. Otherwise we compute a CI for the probability of the closed-loop system satisfying the invariant (line 2). For this purpose, we consider a discrete-time version of the plant, simulated via an approximate ODE solver based on the first term of the Taylor series expansion. The time steps for ODE integration are obtained by discretizing the time between the controller sampling points (see Definition 2.1) using the discretization parameter m.

To compute the CI, our implementation uses sequential Bayesian estimation for efficiency reasons (Shmarov and Zuliani, 2015), but other standard statistical techniques may also be employed (e.g., the Chernoff-Hoeffding bound). After an adequate number of controller parameters are sampled and evaluated, the best performing sample is chosen (line 2), and the CE distribution is updated accordingly (line 2, see (Rubinstein, 1999) for more details). This is repeated until a maximum number of iterations is reached.
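As a point of comparison for the sequential Bayesian estimator, the Chernoff-Hoeffding bound mentioned above gives an a priori sample count for a CI of given half-width and confidence; the function below is an illustrative helper, not part of the paper's implementation.

```python
import math

def hoeffding_samples(half_width, confidence):
    """Chernoff-Hoeffding bound: to estimate a probability within
    +/- half_width with the given confidence, it suffices to take
    n >= ln(2 / (1 - confidence)) / (2 * half_width**2) i.i.d. samples."""
    return math.ceil(math.log(2.0 / (1.0 - confidence))
                     / (2.0 * half_width ** 2))
```

For instance, a half-width of 0.05 at confidence 0.99 requires on the order of a thousand samples; halving the half-width quadruples the sample count, which is why sequential estimators that stop early can be more efficient in practice.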

// Modified Cross-Entropy (CE) algorithm
repeat
      // sample performance queue
      repeat
            sample controller parameters from the CE distribution
            if the sample passes the stability check then // Theorem 3.6
                  compute a confidence interval, with the required size and confidence, for the probability of satisfying the invariant with plant discretization m and the sampled controller
            // add the sample performance to the queue
      until enough samples have been evaluated
      update the CE distribution using the best-performing samples // discard head
      // empty the queue
until a maximum number of iterations is reached
return the best-performing parameters and their confidence interval
Algorithm 2 Procedure optimize
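In the same spirit as Algorithm 2, but stripped of the stability check and the CI machinery, a generic cross-entropy maximizer over a hyperbox can be sketched as follows; all names and the Gaussian sampling distribution are our illustrative choices.

```python
import numpy as np

def cross_entropy_maximize(score, lo, hi, n=100, elite_frac=0.1,
                           iters=30, rng=None):
    """Minimal cross-entropy method: sample parameters from a Gaussian,
    keep the elite fraction with the highest score (here the score would
    be the estimated safety probability), and refit the sampling
    distribution to the elites.  Box constraints are enforced by clipping."""
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    mu, sigma = (lo + hi) / 2.0, (hi - lo) / 2.0
    n_elite = max(1, int(elite_frac * n))
    for _ in range(iters):
        samples = np.clip(rng.normal(mu, sigma, (n, len(mu))), lo, hi)
        scores = np.array([score(s) for s in samples])
        elites = samples[np.argsort(scores)[-n_elite:]]   # best n_elite samples
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu
```

In the full algorithm the score of an unstable candidate would simply be set to the worst value, which is how the stability test of Theorem 3.6 prunes the search.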

Procedure verify

We use the ProbReach tool (Shmarov and Zuliani, 2015) to compute a CI for the probability that a candidate digital controller in closed-loop with the plant satisfies the time-bounded invariant (line 1 of Algorithm 1). This step is necessary since the candidate controller has been obtained using an approximate, discrete-time solver for simulating the (continuous) plant dynamics, while ProbReach uses instead an SMT solver (Gao et al., 2013) to handle the plant dynamics in a sound manner. In particular, ProbReach allows one to derive a guaranteed confidence interval for the satisfaction probability with a given confidence, where the number of samples is that of the Monte Carlo synthesis problem (see Definition 4.2). (We note that in general the number of samples will depend on the size of the interval and on the confidence.)

Procedure verify consists of two steps. The first step builds a hybrid system (the model format accepted by ProbReach) representing the closed-loop system under the candidate controller. In the second step, verify invokes ProbReach with three parameters: the hybrid system, and the required minimum size and confidence of the confidence interval to compute. We remark that the size of the confidence interval cannot be guaranteed in general (Shmarov and Zuliani, 2016) because of the undecidability of reasoning about nonlinear arithmetic. Nevertheless, the confidence interval returned by ProbReach via verify can be fully trusted from both the statistical and numerical viewpoints: while the interval size might be larger than requested, the confidence is guaranteed to be at least the required value, as the sampled controllers are evaluated by SMT and verified numerical techniques.

Theorem 5.1.

Let S be an SDSS for which the synthesis problem of Definition 4.1 is feasible for a given probability threshold and controller degree. Suppose that Algorithm 1, with the corresponding parameters, returns a controller and an interval whose lower bound is above the threshold. Then the controller is a solution of the Monte Carlo problem of Definition 4.2 for that threshold, degree, and confidence.

Proof (sketch).

It suffices to note that Algorithm 1 terminates either by finding a controller of minimal degree whose safety probability is at least , with confidence , or by finding a controller of degree with the best confidence interval produced by verify. By hypothesis a controller of degree that satisfies