Safe Interactive Model-Based Learning

11/15/2019 · Marco Gallieri et al. · NNAISENSE

Control applications present hard operational constraints, and a violation of these can result in unsafe behavior. This paper introduces Safe Interactive Model-Based Learning (SiMBL), a framework to refine an existing controller and a system model while operating on the real environment. SiMBL is composed of the following trainable components: a Lyapunov function, which determines a safe set; a safe control policy; and a Bayesian RNN forward model. A min-max control framework, based on alternate minimisation and backpropagation through the forward model, is used for the offline computation of the controller and the safe set. Safety is formally verified a-posteriori with a probabilistic method that utilizes the Noise Contrastive Priors (NCP) idea to build a Bayesian RNN forward model with an additive state uncertainty estimate which is large outside the training data distribution. Iterative refinement of the model and the safe set is achieved thanks to a novel loss that conditions the uncertainty estimates of the new model to be close to the current one. The learned safe set and model can also be used for safe exploration, i.e., to collect data within the safe invariant set, for which a simple one-step MPC is proposed. The individual components are tested on the simulation of an inverted pendulum with limited torque and stability region, showing that iteratively adding more data can improve the model, the controller, and the size of the safe region.


1 Approach rationale

Safe Interactive Model-Based Learning (SiMBL) aims to control a deterministic dynamical system:

$x_{t+1} = f(x_t, u_t), \quad y_t = x_t,$ (1)

where $x_t$ is the state and $y_t$ are the measurements, in this case assumed equivalent. The system (1) is sampled with a known, constant sampling time and it is subject to closed and bounded, possibly non-convex, operational constraints on the state and input:

$x_t \in \mathcal{X}, \quad u_t \in \mathcal{U}.$ (2)

The stability of (1) is studied using discrete-time systems analysis. In particular, tools from discrete-time control Lyapunov functions (Blanchini; Khalil_book) will be used to compute policies that can keep the system safe.

Safety.

In this work, safety is defined as the capability of a system to remain within a subset $\mathcal{S}$ of the operational constraints and to return asymptotically to the equilibrium state from anywhere in $\mathcal{S}$. A feedback control policy, $\pi$, is certified as safe if it can provide safety with high probability. In this work, safety is verified with a statistical method that extends bobiti_samplingdriven_nodate.

Safe learning.

Figure 1: Safe Interactive Model-Based Learning (SiMBL) rationale. The approach is centered around an uncertain RNN forward model, for which we compute a safe set and a control policy using principles from robust control. This allows for safe exploration through MPC and iterative refinement of the model and the safe set. An initial safe policy is assumed known.

The proposed framework aims at learning a policy, $\pi$, and a Lyapunov function, $V$, by means of simulated trajectories from an uncertain forward model and an initial policy, $\pi_0$, used to collect data. The model, the policy, and the Lyapunov function are iteratively refined while safely collecting more data through a Safe Model Predictive Controller (Safe-MPC). Figure 1 illustrates the approach.

Summary of contribution.

This work presents the algorithms for: 1) iteratively learning a novel Bayesian RNN model with a large posterior over unseen states and inputs; 2) learning a safe set and the associated controller with neural networks from the model trajectories; 3) safe exploration with MPC. For 1) and 2), we propose to retrain the model from scratch using a consistency prior to include knowledge of the previous uncertainty and then to recompute the safe set. The growth of the safe set as more data becomes available, as well as the safety of the exploration strategy, is demonstrated on an inverted pendulum simulation with limited control torque and stability region. Their final integration for continuous model and controller refinement with data from safe exploration (see Figure 1) is left for future work.

2 The Bayesian recurrent forward model

A discrete-time stochastic forward model of system (1) is formulated as a Bayesian RNN. A grey-box approach is used, where available prior knowledge is integrated into the network in a differentiable way (for instance, the known relation between an observation and its derivative). The model provides an estimate of the next-state distribution whose uncertainty is large (up to a defined maximum) where there is no available data. This is inspired by recent work on Noise Contrastive Priors (NCP) (hafner_reliable_2018). We extend the NCP approach to RNNs and propose the Noise Contrastive Prior Bayesian RNN (NCP-BRNN), with full state information, which follows the discrete-time update:

(3)
(4)
(5)
(6)

where the hatted quantities denote the state and measurement estimated from the model at time $t$, and the process noise is drawn from the distribution model, whose mean and standard deviation are computed from neural networks sharing some initial layers. In particular, the mean combines an MLP with some physics prior, while the final activation of the standard deviation network is a sigmoid which is then scaled by a hyperparameter, namely, a finite maximum variance. The next-state distribution depends on the current state estimate, the input, and a set of unknown constant parameters, which are to be learned from the data. The estimated state is for simplicity assumed to have the same physical meaning as the true system state. The system state is measured with a Gaussian uncertainty whose standard deviation is also learned from data. During control, the measurement noise is assumed to be negligible. Therefore, the control algorithms will need to be robust with respect to the model uncertainty. Extensions to partial state information and output-noise robust control are also possible but are left for future work.
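To make the structure concrete, the following is a minimal PyTorch sketch of a one-step transition model in this spirit: a shared trunk feeds a mean head (with a simple integrator physics prior) and a sigmoid-bounded standard-deviation head. The names (NCPBRNNCell, mean_head, sigma_max) and layer sizes are illustrative assumptions, not the architecture used in the experiments.

import torch
import torch.nn as nn

class NCPBRNNCell(nn.Module):
    """Sketch of a one-step stochastic transition model with a bounded,
    state-dependent standard deviation (hypothetical layer sizes)."""
    def __init__(self, state_dim, input_dim, hidden=64, sigma_max=1.0, dt=0.05):
        super().__init__()
        self.dt = dt
        self.sigma_max = sigma_max
        # Shared initial layer, as described in the text.
        self.shared = nn.Sequential(nn.Linear(state_dim + input_dim, hidden), nn.Tanh())
        self.mean_head = nn.Linear(hidden, state_dim)
        self.sigma_head = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                        nn.Linear(hidden, state_dim), nn.Sigmoid())

    def forward(self, x, u):
        h = self.shared(torch.cat([x, u], dim=-1))
        # Mean next state: Euler-integrator prior plus a learned correction.
        mean = x + self.dt * self.mean_head(h)
        # Standard deviation: sigmoid scaled by the maximum-variance hyperparameter.
        sigma = self.sigma_max * self.sigma_head(h)
        return mean, sigma

    def sample_next(self, x, u):
        mean, sigma = self.forward(x, u)
        return mean + sigma * torch.randn_like(mean)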

Towards reliable uncertainty estimates with RNNs

The fundamental assumption for model-based safe learning algorithms is that the model predictions contain the actual system state transitions with high probability (berkenkamp_safe_2017). This is difficult to meet in practice for most neural network models. To mitigate this risk, we train our Bayesian RNNs on sequences and include a Noise-Contrastive Prior (NCP) term (hafner_reliable_2018). In the present work, the uncertainty is modelled as a point-wise Gaussian with mean and standard deviation that depend on the current state as well as on the input. The learned 1-step standard deviation is assumed to be a diagonal matrix. This assumption is limiting, but it is common in variational neural networks for practicality reasons (zhao_infovae_2017; chen_variational_2016). The NCP concept is illustrated in Figure 2. More complex uncertainty representations will be considered in future works.

The cost function used to train the model is:

(7)

where the first term is the expected negative log likelihood over the uncertainty distribution, evaluated over the training data. The second term is the KL-divergence, which is evaluated in closed form over predictions generated from a set of background initial states and input sequences. These are sampled from a uniform distribution for the first model; then, once a previous model is available and new data is collected, they are obtained using rejection sampling with PyMC (Salvatier2016), with an acceptance condition based on the uncertainty of the previous model. If a previous model is available, then the final term is used, which is an uncertainty-consistency prior that forces the uncertainty estimates over the training data not to increase with respect to the previous model. The loss (7) is optimised using stochastic backpropagation through truncated sequences. In order to have further consistency between model updates, if a previous model is available, we train from scratch but stop optimising once the final loss of the previous model is reached.
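A minimal sketch of how such a loss could be assembled in PyTorch is given below, assuming the one-step model interface from the sketch above; the NCP prior width, the weighting factors, and the helper name ncp_brnn_loss are illustrative and not taken from the paper.

import torch
from torch.distributions import Normal, kl_divergence

def ncp_brnn_loss(model, prev_model, x, u, x_next, x_bg, u_bg,
                  prior_sigma=1.0, beta=1.0, gamma=1.0):
    # Expected negative log likelihood over the training data.
    mean, sigma = model(x, u)
    nll = -Normal(mean, sigma).log_prob(x_next).mean()

    # Noise-contrastive prior: on background (out-of-data) points, pull the
    # predictive distribution towards a wide prior centred on the current state.
    mean_bg, sigma_bg = model(x_bg, u_bg)
    prior = Normal(x_bg, prior_sigma * torch.ones_like(x_bg))
    ncp = kl_divergence(Normal(mean_bg, sigma_bg), prior).mean()

    # Uncertainty-consistency prior: do not become more uncertain than the
    # previous model on the training data (only once a previous model exists).
    consistency = torch.zeros(())
    if prev_model is not None:
        with torch.no_grad():
            _, sigma_prev = prev_model(x, u)
        consistency = torch.relu(sigma - sigma_prev).mean()

    return nll + beta * ncp + gamma * consistency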

Figure 2: Variational neural networks with Noise Contrastive Priors (NCP). Predicting sine-wave data (red-black) with confidence bounds (blue area) using NAIS-Net (Ciccone2018NAISNetSD) and NCP (hafner_reliable_2018).

3 The robust control problem

We approximate a chance-constrained stochastic control problem with a min-max robust control problem over a convex uncertainty set. This non-convex min-max control problem is then also approximated by computing the loss only at the vertices of the uncertainty set. To compensate for this approximation, and inspired by variational inference, the centre of the set is sampled from the uncertainty distribution itself (Figure 3). The procedure is detailed below.

Lyapunov-Net.

The considered Lyapunov function is:

(8)

where is a feedforward network that produces a matrix, where and are hyperparameters. The network parameters have to be trained and they are omitted from the notation. The term represents the prior knowledge of the state constraints. In this work we use:

(9)

where the prior term is the Minkowski functional (Minkowski functionals measure the distance from the set center and are positive definite) of a user-defined usual region of operation, namely:

(10)

Possible choices for the Minkowski functional include quadratic functions, norms or semi-norms (Blanchini; Horn:2012:MA:2422911). Since $V$ must be positive definite, a hyperparameter is introduced. The trainable part of the function is chosen to be piece-wise quadratic, but this is not the only possible choice; in fact, one can use any positive definite and radially unbounded function, and for the same problem multiple Lyapunov functions can exist (see also Blanchini). While other forms are possible, as in Blanchini, with (8) the activation function does not need to be invertible. The study of the generality of the proposed function is left for future consideration.
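The following is a minimal sketch of a Lyapunov candidate with this structure: a feedforward network produces a rectangular matrix L(x), the quadratic form uses L(x)L(x)^T plus a small identity to enforce positive definiteness, and the prior is a squared box Minkowski functional. The parametrisation, sizes, and the box prior are illustrative assumptions rather than the exact choice made in the paper.

import torch
import torch.nn as nn

class LyapunovNet(nn.Module):
    """Sketch of V(x) = x^T (L(x) L(x)^T + eps*I) x + prior(x) with a box prior."""
    def __init__(self, state_dim, hidden=64, rank=4, eps=1e-3, box=None):
        super().__init__()
        self.eps = eps
        self.rank = rank
        self.state_dim = state_dim
        self.box = torch.ones(state_dim) if box is None else torch.as_tensor(box)
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, state_dim * rank))

    def forward(self, x):
        L = self.net(x).view(-1, self.state_dim, self.rank)
        P = L @ L.transpose(1, 2) + self.eps * torch.eye(self.state_dim)
        quad = torch.einsum('bi,bij,bj->b', x, P, x)
        # Prior: squared Minkowski functional of the box |x_i| <= b_i.
        prior = (torch.abs(x) / self.box).max(dim=-1).values ** 2
        return quad + prior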

Safe set definition.

Denote the candidate safe level set of $V$ as:

$\mathcal{S} = \{ x \in \mathcal{X} : V(x) \leq l_S \},$ (11)

where $l_S > 0$ is the safe level. If, for all $x_t \in \mathcal{S}$, the function $V$ satisfies the Lyapunov inequality over the system closed-loop trajectory with a control policy $u_t = \pi(x_t)$, namely,

$V(x_{t+1}) - V(x_t) < 0, \quad \forall x_t \in \mathcal{S} \setminus \{0\},$ (12)

then the set $\mathcal{S}$ is safe, i.e., it satisfies the conditions of positive-invariance (Blanchini; Kerrigan:2000). Note that the policy can be either a neural network or a model-based controller, for instance a Linear Quadratic Regulator (LQR, see KalmanLQR) or a Model Predictive Controller (MPC, see Maciejowski_book; rawlingsMPC; Cannon_book; Rakovic2019). A stronger condition than eq. (12) is often used in the context of optimal control:

$V(x_{t+1}) - V(x_t) \leq -\ell(x_t, u_t),$ (13)

where $\ell$ is a positive semi-definite stage loss. In this paper, we focus on training policies with the quadratic loss used in LQR and MPC, where the origin is the target equilibrium, namely:

$\ell(x, u) = x^\top Q x + u^\top R u, \quad Q \succeq 0, \; R \succ 0.$ (14)

From chance-constrained to min-max control

Consider the problem of finding a controller $\pi$ and a function $V$ such that:

(15)

where $\mathbb{P}$ represents a probability and $\epsilon \in [0, 1)$. This is a chance-constrained non-convex optimal control problem (Cannon_book). We truncate the distributions and approximate (15) with:

(16)

which is deterministic. A strategy to jointly learn $V$ and $\pi$ fulfilling (16) is presented next.

4 Learning the controller and the safe set

We wish to build a controller $\pi$, a function $V$, and a safe level $l_S$, given the state-transition probability model, such that the condition in (13) is satisfied with high probability for the physical system generating the data. Denote the one-step prediction from the model in (3), in closed loop with $\pi$, as $\hat{x}^+$, where the time index is omitted.

Approximating the high-confidence prediction set.

A polytopic approximation of a high-confidence region of the estimated uncertain set is obtained from the parameters of the predictive distribution and used for training $V$ and $\pi$. In this work, the uncertain set is taken as a hyper-diamond centered at the predicted mean $\hat{x}^+$, scaled by the (diagonal) standard deviation matrix $\Sigma$:

$\Xi = \left\{ x : \left\| \Sigma^{-1} \left( x - \hat{x}^+ \right) \right\|_1 \leq z \right\},$ (17)

where $z > 0$ is a hyper-parameter. This choice of set is inspired by the Unscented Kalman filter (wan2000). Since $\Sigma$ is diagonal, the vertices of $\Xi$ are given by the columns of the matrix resulting from multiplying $z\,\Sigma$ with a signed mask, such that:

$v_i^{\pm} = \hat{x}^+ \pm z\, \Sigma\, e_i, \quad i = 1, \dots, n_x,$ (18)

where $e_i$ denotes the $i$-th canonical basis vector.
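A small sketch of the vertex enumeration in (17)-(18), assuming batched tensors and the diagonal standard deviation discussed above (the function name and shapes are illustrative):

import torch

def diamond_vertices(mean, sigma, z=2.0):
    """Vertices of the hyper-diamond of (17): mean +/- z*sigma_i along each axis.
    mean, sigma: (batch, n) tensors; returns a (batch, 2n, n) tensor."""
    n = mean.shape[-1]
    eye = torch.eye(n, device=mean.device)
    signed_mask = torch.cat([eye, -eye], dim=0)                    # (2n, n)
    return mean.unsqueeze(1) + z * sigma.unsqueeze(1) * signed_mask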

Learning the safe set.

Assume that a controller $\pi$ is given. Then, we wish to learn a $V$ of the form of (8), such that the corresponding safe set is as big as possible, ideally as big as the state constraint set $\mathcal{X}$. In order to do so, the parameters of $V$ are trained using a grid of initial states, a forward model to simulate the next state under the policy $\pi$, and an appropriate cost function. The cost for $V$ and $l_S$ is inspired by (richards_lyapunov_2018). It consists of a combination of two objectives: the first one penalises the deviation from the Lyapunov stability condition; the second one is a classification penalty that separates the stable points from the unstable ones by means of the decision boundary, $V(x) = l_S$. The combined robust Lyapunov function cost is:

(19)
(20)
(21)
(22)

where a trade-off parameter balances stability against volume. The robust Lyapunov decrease in (22) is evaluated by using sampling to account for uncertainty over the confidence interval. Sampling of the set centre is performed as opposed to fixing it to the predicted mean, which did not seem to produce valid results. Let us omit the time index for ease of notation. We substitute the decrease term with the following sampled worst-case quantity, which we define as:

(23)

Equations (22) and (23) require a maximisation of the non-convex function $V$ over the convex set $\Xi$. For the considered case, a sampling technique or another optimisation (similar to adversarial learning) could be used for a better approximation of the max operator. The maximum over $\Xi$ is instead approximated by the maximum over its vertices:

$\max_{x \in \Xi} V(x) \approx \max_{i \in \{1, \dots, 2 n_x\}} V(v_i),$ (24)

This consists of a simple enumeration followed by a max over tensors that can be easily handled. Finally, during training, (23) is implemented in a variational inference fashion by evaluating (24) at each epoch over a different sample of the set centre. This entails a variational posterior over the centre of the uncertainty interval. The approach is depicted in Figure 3.
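Putting the pieces together, the robust decrease and classification terms of (19)-(24) could be sketched as below; diamond_vertices is the helper above, and the specific weighting and classification form are illustrative assumptions rather than the exact loss used in the paper.

import torch

def robust_lyapunov_loss(V, policy, model, x, level, z=2.0, trade_off=1.0):
    # Sample the centre of the uncertainty set from the model itself (variational
    # treatment of the centre), then take the worst case of V over the vertices.
    u = policy(x)
    mean, sigma = model(x, u)
    centre = mean + sigma * torch.randn_like(mean)
    v_vertices = V(diamond_vertices(centre, sigma, z).flatten(0, 1))
    v_next_max = v_vertices.view(x.shape[0], -1).max(dim=1).values
    v_now = V(x)

    inside = (v_now <= level).float()                       # points in the candidate set
    violates = (v_next_max > v_now).float().detach()        # robust decrease violated?
    decrease = inside * torch.relu(v_next_max - v_now)      # penalise the violation
    # Classification term: push decreasing points inside the level set, others outside.
    classify = (1.0 - violates) * torch.relu(v_now - level) + violates * torch.relu(level - v_now)
    return decrease.mean() + trade_off * classify.mean()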

The proposed cost is inspired by richards_lyapunov_2018, with the difference that here there is no need for labelling the states as safe by means of a multi-step simulation. Moreover, in this work we train the Lyapunov function and controller together, while in (richards_lyapunov_2018) the latter was given.

Learning the safe policy.

We alternate the minimisation of the Lyapunov loss (19) and the solution of the variational robust control problem:

(25)
(26)

subject to the forward model (3). In this work, (25) is solved using backpropagation through the policy, the model, and $V$. The safety constraint, namely that the worst-case next state remains within the safe level set, is relaxed through a log-barrier (Boyd:2004:CO:993483). If a neural policy solves (25) and satisfies the safety constraint, then it is a candidate robust controller for keeping the system within the safe set $\mathcal{S}$. Note that the expectation in (26) is once again treated as a variational approximation of the expectation over the centre of the uncertainty interval.

Figure 3: Approximating the non-convex maximisation. The centre of the uncertain set is sampled and the Lyapunov function is evaluated at its vertices.
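As an illustration of the relaxed objective (25)-(26), the sketch below combines the quadratic stage loss (14), the worst-case Lyapunov value at the vertices, and a log-barrier for the safety constraint; the weights, the barrier form, and the function name policy_loss are assumptions for illustration only.

import torch

def policy_loss(V, policy, model, x, level, Q=1.0, R=0.1, barrier=1e-2):
    u = policy(x)
    mean, sigma = model(x, u)
    # Variational sampling of the set centre, as in the Lyapunov loss.
    centre = mean + sigma * torch.randn_like(mean)
    v_vertices = V(diamond_vertices(centre, sigma).flatten(0, 1))
    v_next_max = v_vertices.view(x.shape[0], -1).max(dim=1).values
    # Quadratic stage loss plus the worst-case Lyapunov value at the next step.
    stage = Q * (x ** 2).sum(dim=-1) + R * (u ** 2).sum(dim=-1)
    # Log-barrier relaxation of the safety constraint V(x_next) <= level.
    safety = -barrier * torch.log(torch.clamp(level - v_next_max, min=1e-6))
    return (stage + v_next_max + safety).mean()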

Obtaining an exact solution to the control problem for all points is computationally impractical. In order to provide statistical guarantees of safety, probabilistic verification is used after $V$ and $\pi$ have been trained. This refines the safe level set and, if successful, provides a probabilistic safety certificate. If the verification is unsuccessful, then the learned $V$, $\pi$, and $l_S$ are not safe. The data collection continues with the previous safe controller until suitable $V$, $\pi$, and $l_S$ are found. Note that the number of training points used for the safe set and controller is in general lower than the number used for verification. The alternate learning procedure for $V$ and $\pi$ is summarised in Algorithm 1. The use of 1-step predictions makes the procedure highly scalable through parallelisation on GPU.

In: forward model (3), initial policy, grid of initial states, training hyperparameters
Out: $V$, $\pi$, $l_S$
for each outer iteration do
        for each Lyapunov update step do
               Adam step on the Lyapunov loss (20)
        for each policy update step do
               Adam step on the policy loss (25)
Algorithm 1 Alternate descent for safe set
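A compact sketch of Algorithm 1 using the two helpers above (the optimiser settings and loop counts are illustrative placeholders):

import torch

def alternate_descent(V, policy, model, x_grid, level,
                      outer_iters=100, lyap_steps=10, policy_steps=10, lr=1e-3):
    opt_V = torch.optim.Adam(V.parameters(), lr=lr)
    opt_pi = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(outer_iters):
        # Adam steps on the Lyapunov loss (20).
        for _ in range(lyap_steps):
            opt_V.zero_grad()
            robust_lyapunov_loss(V, policy, model, x_grid, level).backward()
            opt_V.step()
        # Adam steps on the relaxed robust policy loss (25).
        for _ in range(policy_steps):
            opt_pi.zero_grad()
            policy_loss(V, policy, model, x_grid, level).backward()
            opt_pi.step()
    return V, policy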

Probabilistic safety verification.

A probabilistic verification is used to numerically prove the physical-system stability with high probability. The resulting certificate is of the form (15), where the failure probability decreases with an increasing number of samples. Following the work of bobiti_samplingdriven_nodate, the simulation is evaluated at a large set of points within the estimated safe set $\mathcal{S}$. Monte Carlo rejection sampling is performed with PyMC (Salvatier2016).

In: $V$, $\pi$, forward model, candidate levels, number of samples
Out: safe level and ultimate bound
for each candidate maximum level $l_{max}$ do
        for each candidate minimum level $l_{min} < l_{max}$ do
               draw uniform state samples such that $l_{min} < V(x) \leq l_{max}$
               draw uncertainty samples from the model
               if $V$ decreases robustly over the next-state distribution for all samples then
                      draw uniform state samples such that $V(x) \leq l_{min}$
                      if $V$ does not exceed $l_{max}$ over the next-state distribution then
                             return SAFE, $l_{max}$, $l_{min}$
Verification failed.
Algorithm 2 Probabilistic safety verification

In practical applications, several factors prevent the trajectory from converging exactly to the target and instead confine it to a neighborhood of the target (the ultimate bound, Blanchini): for instance, the structural bias of the policy, discount factors in RL methods, or persistent uncertainty in the model, the state estimates, and the physical system itself. Therefore, we extended the verification algorithm of (bobiti_samplingdriven_nodate) to estimate the ultimate bound as well as the invariant set, as outlined in Algorithm 2. Given a maximum and a minimum level, $l_{max}$ and $l_{min}$, we first sample initial states uniformly between these two levels and check for a robust decrease of $V$ over the next-state distribution. If this is verified, then we sample uniformly from inside the minimum level set (where $V$ may not decrease) and check that $V$ does not exceed the maximum level over the next-state distribution. The distribution is evaluated by means of uniform samples of the uncertainty, independent of the current state, within the confidence interval. These are then scaled using the standard deviation from the model. We search for the levels with a fixed step size.

Note that, in Algorithm 2, the uncertainty of the surrogate model is taken into account by sampling a single uncertainty realisation for the entire set of initial states. These values are then scaled using the standard deviation from the forward model. This step is computationally convenient but breaks the assumption that variables are drawn from a uniform distribution. We leave addressing this to future work. In this paper, independent Gaussian uncertainty models are used and stability is verified directly on the environment. Note that probabilistic verification is expensive but necessary, as pathological cases could result in the training loss (19) for the safe set converging to a local minimum with a very small set. If this is the case, then usually the forward model is not accurate enough or the uncertainty hyperparameter is too large. Note that Algorithm 2 is highly parallelizable.
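A simplified sketch of the verification logic of Algorithm 2 is shown below for one pair of candidate levels; sampler is an assumed helper returning uniform states within the constraint set, and the single shared uncertainty realisation mirrors the simplification discussed above.

import torch

def verify_levels(V, policy, model, sampler, l_max, l_min, n_samples=100000, z=2.0):
    with torch.no_grad():
        x = sampler(n_samples)
        v = V(x)
        eps = torch.rand(x.shape[-1]) * 2.0 - 1.0      # one shared realisation in [-1, 1]
        # States in the ring l_min < V(x) <= l_max must decrease robustly.
        ring = x[(v > l_min) & (v <= l_max)]
        mean, sigma = model(ring, policy(ring))
        if not torch.all(V(mean + z * sigma * eps) < V(ring)):
            return False
        # States inside the ultimate bound must not leave the outer level set.
        core = x[v <= l_min]
        mean, sigma = model(core, policy(core))
        return bool(torch.all(V(mean + z * sigma * eps) <= l_max))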

5 Safe exploration

Once a verified safe set is found, the environment can be controlled by means of a 1-step MPC with probabilistic stability (see Appendix). Consider the constraint that the predicted next state remains within the verified level set of $V$, where $V$ and $\pi$ come from Algorithm 1 and the level from Algorithm 2. The Safe-MPC exploration strategy follows:

Safe-MPC for exploration.

For collecting new data, solve the following MPC problem:

(27)

where is the exploration hyperparameter, is the regulation or exploitation parameter and is the info-gain from the model, similar to (hafner_reliable_2018):

(28)

The full derivation of the problem and a probabilistic safety result are discussed in Appendix.

Alternate min-max optimization.

Problem (27) is approximated using alternate descent. In particular, the maximization of the loss function over the uncertain future state, given the current control candidate, is alternated with the minimization over the control, given the current candidate future state. Adam (kingma_adam:_2014) is used for both steps.
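A sketch of the alternate min-max solution for a single state is given below: Adam descends on the action while a second optimiser ascends on a bounded perturbation of the next state inside the confidence interval, with an information-gain bonus in the spirit of (28). Weights, step counts, and the tanh parameterisation of the perturbation are illustrative assumptions.

import torch

def safe_exploration_action(V, model, x, level, u_init, z=2.0,
                            explore=1.0, exploit=0.1, soft=10.0,
                            outer_iters=50, max_steps=10, lr=1e-2):
    u = u_init.clone().requires_grad_(True)
    d = torch.zeros_like(x, requires_grad=True)      # perturbation of the next state
    opt_u = torch.optim.Adam([u], lr=lr)
    opt_d = torch.optim.Adam([d], lr=lr)

    def objective():
        mean, sigma = model(x, u)
        x_next = mean + z * sigma * torch.tanh(d)    # stays inside the confidence set
        v_next = V(x_next.unsqueeze(0)).squeeze()
        info_gain = torch.log(sigma).sum()           # exploration bonus, cf. (28)
        stage = exploit * ((x ** 2).sum() + (u ** 2).sum())
        penalty = soft * torch.relu(v_next - level)  # soft safety constraint
        return stage + v_next + penalty - explore * info_gain

    for _ in range(outer_iters):
        for _ in range(max_steps):                   # inner maximisation over x_next
            opt_d.zero_grad()
            (-objective()).backward()
            opt_d.step()
        opt_u.zero_grad()                            # outer minimisation over the action
        objective().backward()
        opt_u.step()
    return u.detach()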

6 Inverted pendulum example

The approach is demonstrated on an inverted pendulum, where the input is the angular torque and the states/outputs are the angular position and velocity of the pendulum. The aim is to collect data safely around the unstable equilibrium point (the origin). The system has a torque constraint that limits the controllable region. In particular, if the initial angle is greater than 60 degrees, then the torque is not sufficient to swing up the pendulum. In order to compare to the LQR, we choose a linear policy followed by a saturating activation, meeting the torque constraints while preserving differentiability.

Safe set with known environment, comparison to LQR.

We first test the safe-net algorithm on the nominal pendulum model and compare the policy and the safe set with those from a standard LQR policy. Figure 4 shows the safe set at different stages of the algorithm, approaching the LQR.

     
     
     
     
Figure 4: Inverted pendulum. Safe set and controller with the proposed method for a known environment model. The initial set is based on a unit circle plus an additional state constraint. Contours show the function levels. The control gain gets closer to the LQR solution as iterations progress, until the minimum of the Lyapunov loss (19) is achieved. The set and controller at that iteration are closest to the LQR solution, which is optimal around the equilibrium in the unconstrained case. In order to maximise the chances of verification, the optimal parameters are selected with early stopping, namely when the Lyapunov loss reaches its minimum.

Safe set with Bayesian model.

In order to test the proposed algorithms, the forward model is fitted on sequences of fixed length for an increasing number of data points. Data is collected in closed loop with the initial controller, $\pi_0$, with different initial states. In particular, we perturb the initial state and control values with random noise whose standard deviations start from small values and double after a fixed number of points. The only prior used is that the velocity is the derivative of the angular position (both normalized). The uncertainty bound was fixed. The architecture was cross-validated with a train-validation split. The model with the best validation predictions as well as the largest safe set was used to generate the results in Figure 6.

(a) environment
(b) NCP-BRNN ( points).
Figure 5: Inverted pendulum verification. Nominal and robust safe sets are verified on the pendulum simulation using samples. We search for the largest stability region and the smallest ultimate bound of the solution. If a simulation is not available, then a two-level sampling on BRNN is performed.

The results demonstrate that the size of the safe set can improve with more data, provided that the model uncertainty decreases and the predictions have comparable accuracy. This motivates exploration.

Figure 6: Inverted pendulum safe set with the Bayesian model. Surrogates are obtained with an increasing amount of data. The initial state and input perturbations from the safe policy are drawn from Gaussians whose standard deviation doubles after a fixed number of points. Top: Mean predictions and uncertainty contours for the NCP-BRNN model; after a certain amount of data, no further improvement is noticed. Bottom: Comparison of safe sets with the surrogates and the environment. Reducing the model uncertainty while maintaining a similar prediction accuracy leads to an increase of the safe set; after a certain amount of data, no further benefits are noticed on the set, which is consistent with the uncertainty estimates.

Verification on the environment.

The candidate Lyapunov function, safe level set, and robust control policy are formally verified through probabilistic sampling of the system state, according to Algorithm 2, where the simulation is used directly. The results are shown in Figure 5. In particular, the computed level sets are verified at the first attempt and no further search for sub-levels or ultimate bounds is needed.

Figure 7: Safe exploration. Comparison of a naive semi-random exploration strategy (multiple trials) with the proposed Safe-MPC for exploration (a single trial). The proposed algorithm achieves efficient space coverage with safety guarantees.

Safe exploration.

Safe exploration is performed using the min-max approach in Section 5. For comparison, a semi-random exploration strategy is also used: while inside the safe set, the action magnitude is set to the maximum torque and its sign is given by a uniform random variable; once the safe level is reached, the safe policy is used. This does not provide any formal guarantees of safety, as the value of $V$ could exceed the safe level, especially for very fast systems and large input signals. This is repeated for several trials in order to estimate the maximum reachable set within the safe set. The results are shown in Figure 7, where the semi-random strategy is used as a baseline and is compared to a single trial of the proposed safe-exploration algorithm. The area covered by our algorithm in a single trial is a substantial fraction of that covered by the semi-random baseline over multiple trials. Extending the length of the trials did not significantly improve the baseline results. Despite being more conservative, our algorithm continues to explore safely indefinitely.

7 Conclusions

Preliminary results show that SiMBL produces a Lyapunov function and a safe set, using neural networks, that are comparable with those of standard optimal control (LQR) and can account for state-dependent additive model uncertainty. A Bayesian RNN surrogate with NCP was proposed and trained for an inverted pendulum simulation. An alternate descent method was presented to jointly learn a Lyapunov function, a safe level set, and a stabilising control policy for the surrogate model with backpropagation. We demonstrated that adding data points to the training set can increase the safe-set size, provided that the model improves and its uncertainty decreases. To this end, an uncertainty prior from the previous model was added to the framework. The safe set was then formally verified through a novel probabilistic algorithm for ultimate bounds and used for safe data collection (exploration). A one-step safe MPC was proposed where the Lyapunov function provides the terminal cost and constraint to mimic an infinite horizon with a high probability of recursive feasibility. Results show that the proposed safe-exploration strategy has better coverage than a naive policy which switches between random inputs and the safe policy.

References

Appendix A Robust optimal control for safe learning

Further detail is provided regarding robust and chance-constrained control.

Chance-constrained and robust control.

Consider the problem of finding a controller $\pi$ and a function $V$ such that:

(29)

where the next state is given by the forward model (3), $\mathbb{P}$ represents a probability and $\epsilon \in [0, 1)$. This is a chance-constrained control problem (Cannon_book; yan_stochastic_2018). Since finding $V$ and $\pi$ that satisfy (29) requires solving a non-convex and stochastic optimization, we approximate (29) with a min-max condition over a high-confidence interval, in the form of a convex set, as follows:

(30)

This is a robust control problem, which is still non-convex but deterministic. In the convex case, (30) can be satisfied by means of robust optimization (Ben-Tal; rawlingsMPC). By following this consideration, we frame the control problem as a non-convex min-max optimization.

Links to optimal control and intrinsic robustness.

To link our approach with optimal control and reinforcement learning, note that if the condition in (13) is met with equality, then the controller and the Lyapunov function satisfy the Bellman equation (rawlingsMPC). Therefore, $\pi$ is optimal and $V$ is the value function of the infinite-horizon optimal control problem with stage loss $\ell$. In practice, this condition is not met with exact equality. Nevertheless, the inequality in (13) guarantees by definition that the system controlled by $\pi$ is asymptotically stable (converges to the origin) and it has a degree of tolerance to uncertainty in the safe set (i.e. if the system is locally Lipschitz) (rawlingsMPC). Vice versa, infinite-horizon optimal control with the considered cost produces a value function which is also a Lyapunov function and provides an intrinsic degree of robustness (rawlingsMPC).
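For concreteness, a sketch of this link, under the equality assumption and with the quadratic stage loss (14), is the Bellman fixed-point relation:

\[
V(x_t) \;=\; \ell\big(x_t, \pi(x_t)\big) \;+\; V\big(f(x_t, \pi(x_t))\big),
\qquad \ell(x, u) = x^\top Q x + u^\top R u,
\]

so that $V$ plays the role of the infinite-horizon value function and $\pi$ of the corresponding optimal policy.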

Appendix B From robust MPC to safe exploration

Once a robust Lyapunov function and invariant set are found, the environment can be controlled by means of a one-step MPC with probabilistic safety guarantees.

One-step robust MPC.

Start by considering the following min-max 1-step MPC problem:

(31)

This is a non-convex min-max optimisation problem with hard non-convex constraints. Solving (31) is difficult, especially in real-time, but it is in general possible if the constraints are feasible. This is true with a probability that depends on the verification procedure, the confidence level used in the procedures, as well as the probability of the model being correct.

Relaxed problem.

Solutions of (31) can be computed in real-time, to a degree of accuracy, by iterative convexification of the problem and the use of fast convex solvers. This is described in Appendix. For the purpose of this paper, we will consider the soft-constrained or relaxed problem:

(32)

once again subject to (3). It is assumed that a scalar penalty weight exists such that the constraint can be enforced. For the sake of simplicity, problem (32) will be addressed using backpropagation, at the price of losing real-time guarantees.

Safe exploration.

For collecting new data, we modify the robust MPC problem as follows:

(33)

where is the exploration hyperparameter, is the regulation or exploitation parameter and is the info-gain from the model, similar to (hafner_reliable_2018):

(34)

Probabilistic Safety.

We study the feasibility and stability of the proposed scheme, following the framework of rawlings_mayne_paper; rawlingsMPC. In particular, if the MPC (31) is always feasible, and the terminal cost and terminal set satisfy (15) with probability , then the MPC (31) enjoys some intrinsic robustness properties. In other words, we should be able to control the physical system and come back to a neighborhood of the initial equilibrium point for any state in , the size of this neighborhood depending on the model accuracy. We assume a perfect solver is used and that the relaxed problems enforce the constraints exactly for a given .

For the exploration MPC to be safe, we wish to be able to find an action satisfying the terminal constraint, starting from the stochastic system. We aim at a probabilistic result. First, recall that we truncate the distribution of the next state to a high-confidence-level z-score, $z$. Once again, we switch to a set-valued uncertainty representation, as it is most convenient, and provide a result that depends on the z-score $z$. Assume that the probability that the model's one-step predictions contain the real state increments within the given confidence interval is known. This probability can be estimated and improved using cross-validation, for instance by fine-tuning the model. It can also be increased after the model training; this can, however, make the control search more challenging. Finally, since we use probabilistic verification, from (15) we have a probability that the terminal set is invariant for the model with truncated distributions, where the one-step reachability set operator (Kerrigan:2000) is computed using the model in closed loop with $\pi$. Note that this probability is determined by the number of verification samples (bobiti_samplingdriven_nodate). Safety of the next state is determined by:

Theorem 1.

Given the current state in the safe set, the probability that (31)-(33) are feasible (safe) at the next time step is:

(35)
(36)
(37)

It must be noted that, whilst one of these probabilities is constant, the size of the corresponding set will generally decrease for an increasing z-score and uncertainty. The probability of any state leading to safety in the next step is given by:

Theorem 2.

Given the current state, the probability that (31)-(33) are feasible (safe) at the next step is:

(38)
(39)

where the one-step robust controllable set for the model is as defined in (Kerrigan:2000). The size of the safe set is a key factor for a safe system. This also depends on the architecture of $V$ and $\pi$, as well as on the stage cost matrices $Q$ and $R$. A stage cost is not explicitly needed for the proposed approach; however, it can be beneficial in terms of additional robustness and serves as a regularisation for the policy.

Appendix C Network architecture for inverted pendulum

Forward Model

Recall the NCP-BRNN definition:

(40)
(41)
(42)

Partition the state as , where the former represents the angular position and the latter the velocity. They are normalised, respectively, to a range of and . We consider a of the form:

(43)

where the mean network is a three-layer feed-forward neural network with nonlinear activations in the two hidden layers. The final layer is linear. The first layer of the mean network is shared with the standard deviation network, which is then followed by one further hidden layer before the final sigmoid layer. The maximum-variance parameter is fixed. The noise standard deviation is passed through a softplus layer in order to keep it positive and was initialised by inverting the softplus. We trained for a fixed number of epochs with a learning rate of 1E-4 and a fixed prediction horizon. The sequences were all of the same length; the number of sequences was increased in increments, and the batch size was adjusted accordingly. The target loss was initialised to a fixed value.

We point out that this architecture is quite general and has been positively tested on other applications, for instance a double pendulum or a joint-space robot model, with states partitioned accordingly.

Lyapunov function

The Lyapunov net consists of three fully-connected hidden layers with units and activations which are then followed by a final linear layer with outputs. These are then reshaped into a matrix, , of size and is evaluated as:

(44)

where the first is a hyperparameter and the second is a trainable scaling parameter which is passed through a softplus. The introduction of this scaling noticeably improved results. The prior function was set to keep the state within its usual region of operation. We used a fixed number of outer epochs for training and inner epochs for the updates of $V$ and $\pi$, with a learning rate of 1E-3. We used a uniform grid of initial states as a single batch.

Exploration MPC

For demonstrative purposes we solved the Safe-MPC using Adam with epochs for the minimisation step and SGD with epochs for the maximisation step. The outer loop used iterations. The learning rates were set to, respectively, and 1E-4. The exploration factor, , was set to as well as the soft constraint factor, . The exploitation factor, , was set to .

Appendix D Considerations on model refinement and policies

Using neural networks presents several advantages over other popular inference models: for instance, their scalability to high-dimensional problems and to large amounts of data, the ease of including physics-based priors and structure in the architecture, and the possibility to learn over long sequences. At the same time, NNs require more data than other methods and offer no formal guarantees. For the guarantees, we have considered a-posteriori probabilistic verification. For the larger amount of data, we have assumed that an initial controller exists (this is often the case) that can be used to safely collect as much data as needed.

Model refinement.

A substantial difficulty was encountered while trying to incrementally improve the results of the neural network with an increasing amount of data. In particular, as more data is collected, larger or more batches must be used. This implies that either the gradient computation or the number of backward passes performed per epoch is different from the previous model training. Consequently, if a model is retrained entirely from scratch, then the final loss and the network parameters can be substantially different from the ones obtained in the previous trial. This might result in having a larger uncertainty than before in certain regions of the state space. If this is the case, then stabilising the model can become more difficult and the resulting safe set might be smaller. We observed this pathology initially and mitigated it with the following steps: first, we use a sigmoid layer to effectively limit the maximum uncertainty to a known hyperparameter; second, we added a consistency loss that encourages the new model to have uncertainty smaller than the previous one over the (new) training set; third, we used rejection sampling for the background based on the uncertainty of the previous model, so that the NCP does not penalise previously known datapoints; finally, we stop the training loop as soon as the final loss of the previous model is exceeded. These ingredients have proven successful in reducing this pathology and, together with training data of increasing variance, have ensured that the uncertainty and the safe set improve up to a certain number of datapoints. Beyond that, however, adding further datapoints has not improved the size of the safe set, which has not reached its maximal possible size. We believe that closing the loop with exploration could improve on this result, but we are also going to investigate further alternatives.

Notably, Gal2016Improving remarked that improving their BNN model was not simple. They tried, for instance, to use a forgetting factor, which was not successful, and concluded that their best solution was to save only a fixed number of the most recent trials. We believe this might not be sufficient for safe learning, as the uncertain space needs to be explored. Future work will further address this topic, for instance, by retraining only part of the network, or possibly by exploring combinations of our approach with the ensemble approach used in max. Initial trials of the former seemed encouraging for deterministic models.

Training NNs as robust controllers.

In this paper, we have used a neural network policy for the computation of the safe controller. This choice was made fundamentally to compare the results with an LQR, which can successfully solve the example. Training policies with backpropagation is not an easy task in general. For more complex scenarios, we envisage two possible solutions: the first is to use evolutionary strategies (stanley2002evolving; salimans2017es) or other global optimisation methods to train the policy; the second is to use a robust MPC instead of a policy. Initial trials of the former seemed encouraging. The latter would result in a change of Algorithm 1, where the policy would not be learned but just evaluated point-wise through an MPC solver. Future work is going to investigate these alternatives.

Appendix E Related work

Robust and Stochastic MPC

Robust MPC can be formulated using several methods: for instance, min-max optimisation (Bemporad_minmax; Kerrigan2004; Raimondo2009), tube MPC (rawlingsMPC; Rakovic2012) or constraint restriction (richards_a._g._robust_2004) provide robust recursive feasibility given a known bounded uncertainty set. In tube MPC, as well as in constraint restriction, the nominal cost is optimized while the constraints are restricted according to the uncertainty set estimate. This method can be more conservative, but it does not require the maximization step. For non-linear systems, computing the required control invariant sets is generally challenging. Stochastic MPC approaches the control problem in a probabilistic way, either using expected or probabilistic constraints. For a broad review of MPC methods and theory one can refer to Maciejowski_book; Camacho2007; rawlingsMPC; Cannon_book; Gallieri2016; Borrelli_book; Rakovic2019.

Adaptive MPC

lorenzen_cannon presented an approach based on tube MPC for linear parameter-varying systems using set-membership estimation. In particular, the constraints and model parameter set estimates are updated in order to guarantee recursive feasibility. pozzoli_tustin_2019 used the Unscented Kalman Filter (UKF) to adapt online the last layer of a novel RNN architecture, the Tustin Net (TN), which was then used to successfully control a double pendulum through MPC. TN is a deterministic RNN which is related to the architecture used in this paper. A comparison of different network architectures, estimation and adaptation heuristics for neural MPC can be found, for instance, in pozzoli.

Stability certification

bobiti_sampling-based_2016 proposed a grid-based deterministic verification method which relies on local Lipschitz bounds of the system dynamics. This approach requires knowledge of the model equations and it was extended to black-box simulations (bobiti_samplingdriven_nodate) using a probabilistic approach. We extend this framework by means of a check for ultimate boundedness and propose to use it with Bayesian models through uncertainty sampling.

MPC for Reinforcement Learning

Williams2017 presented an information-theoretical framework to solve a non-linear MPC in real-time using a neural network model and performed model-based RL on a race car scale-model with non-convex constraints. gros_towards_2019 used policy gradient methods to learn a classic robust MPC for linear systems.

Safe learning

Yang2015b; Yang2015 looked, respectively, at using GPs for risk-sensitive and fault-tolerant MPC. vinogradska_gaussian_nodate proposed a quadrature method for computing invariant sets, stabilising and unrolling GP models for use in RL. berkenkamp_safe_2017 used the deterministic verification method of (bobiti_sampling-based_2016) on GP models and embedded it into an RL framework using approximate dynamic programming. The resulting policy has a high probability of safety. Akametalu2014 studied the reachable sets of GP models and proposed an iterative procedure to refine the stabilisable (safe) set as more data is collected. hewing_cautious_2017 reviewed uncertainty propagation methods and formulated a chance-constrained MPC for grey-box GP models. The approach was demonstrated on an autonomous racing example with non-linear constraints. Limon2017 presented an approach to learn a non-linear robust model predictive controller based on worst-case bounding functions and Holder-constant estimates from a non-parametric method. In particular, they use both the trajectories from an initial offline model for recursive feasibility as well as an online refined model to compute the optimal loss. koller_learning-based_2018 provide high-probability guarantees of feasibility for a GP-based MPC with Gaussian kernels. This is done using a closed-form exact Taylor expansion that results in the solution of a generalised eigenvalue problem per each step of the prediction horizon.

cheng_end--end_2019 complemented model-free RL methods (TRPO and DDPG) with a GP model-based approach using a barrier-function safety loss, the GP being refined online. chow_lyapunov-based_2018 developed a safe Q-learning variant for constrained Markov decision processes based on a state-action Lyapunov function. The Lyapunov function is shown to be equal to the value function for a safety constraint function, defined over a finite horizon. This is constructed by means of a linear programme. chow_lyapunov-based_2019 extended this approach to policy gradient methods for continuous control. Two projection strategies have been proposed to map the policy into the space of functions that satisfy the Lyapunov stability condition. Papini2018SafelyEP proposed a policy gradient method for exploration with a statistical guarantee of increase of the value function. wabersich_probabilistic_2019 formulated a probabilistically safe method to project the action resulting from a model-free RL algorithm into a safe manifold. Their algorithm is based on results from chance-constrained tube-MPC and makes use of a linear surrogate model. thananjeyan_safety_2019 approximated the model uncertainty using an ensemble of recurrent neural networks. Safety was approached by constraining the ensemble to be close to a set of successful demonstrations, for which a non-parametric distribution is trained. Thus, a stochastic constrained MPC is approximated by using a set of the ensemble model trajectories. The model rollouts are entirely independent. Under several assumptions the authors proved the system safety. These assumptions can rarely be met in practice; however, the authors demonstrated that the approach works practically on the control of a manipulator in non-convex constrained spaces with a low ensemble size.

Learning Lyapunov functions

verdier_formal_2017 used genetic programming to learn a polynomial control Lyapunov function for the automatic synthesis of a continuous-time switching controller. richards_lyapunov_2018 proposed an architecture and a learning method to obtain a Lyapunov neural network from labelled sequences of state-action pairs. Our Lyapunov loss function is inspired by richards_lyapunov_2018 but does not make use of labels nor of sequences longer than one step. These approaches were all demonstrated on an inverted pendulum simulation. taylor_episodic_2019 developed an episodic learning method to iteratively refine the derivative of a continuous-time Lyapunov function and improve an existing controller by solving a QP. Their approach exploits a factorisation of the Lyapunov function derivatives based on feedback linearisation of robotic system models. They test the approach on a segway simulation.

Planning and value functions

POLO (lowrey_plan_2018) consists of a combination of online planning (MPC) and offline value function learning. The value function is then used as the terminal cost for the MPC, mimicking an infinite horizon. The result is that, as the value function estimation improves, one can, in theory, shorten the planning horizon and have a near-optimal solution. The authors demonstrated the approach using exact simulation models. This work is related to SiMBL, with the difference that our terminal cost is a Lyapunov function that can be used to certify safety.

Uncertain models for RL

PILCO (DeisenrothRT2011; deisenroth_gaussian_2015) used GP models for model-based RL in an MPC framework that trades off exploration and exploitation. Frigola2014VariationalGP formulated a variational GP state-space model for time series. Gal2016Improving showed that variational NNs with dropout can significantly outperform GP models, when used within PILCO, both in terms of performance as well as computation and scalability. chua_deep_2018 proposed the use of ensemble RNN models and an MPC-like strategy to distinguish between noise and model uncertainty. They plan over a finite horizon with each model and optimise the action using a cross-entropy method. kurutach_model-ensemble_2018 used a similar forward model for trust-region policy optimisation and showed significant improvement in sample efficiency with respect to both single-model (no uncertainty) as well as model-free methods. MAX (max) used a similar ensemble of RNN models for efficient exploration, significantly outperforming baselines in terms of sample efficiency on a set of discrete and continuous control tasks. depeweg_learning_2016 trained a Bayesian neural network using the alpha-divergence and demonstrated that this can outperform variational networks, MLP and GP models when used for stochastic policy search over a gas turbine example with partial observability and bi-modal distributions. hafner_learning_2018 proposed to use an RNN with both deterministic and stochastic transition components together with a multi-step variational inference objective. Their framework predicts rewards directly from pixels. This differs from our approach as we do not have deterministic states and consider only full state information. Carron2019 tested the use of a nested control scheme based on an internal feedback linearisation and an external chance-constrained offset-free MPC. The MPC is based on nominal linear models and uses both a sparse GP disturbance model as well as a piece-wise constant offset which is estimated online via the Extended Kalman Filter (EKF). The GP uncertainty is propagated through a first-order Taylor expansion. The approach was tested on a robotic arm.

Acknowledgements

The authors are grateful to Christian Osendorfer, Boyan Beronov, Simone Pozzoli, Giorgio Giannone, Vojtech Micka, Sebastian East, David Alvarez, Timon Wili, Pierluca D’Oro, Wojciech Jaśkowski, Pranav Shyam, Mark Cannon and Andrea Carron for constructive discussions. We are also grateful to Felix Berkenkamp for the support given while experimenting with their safe learning tools. All of the code used for this paper was implemented from scratch by the authors using PyTorch. Finally, we thank everyone at NNAISENSE for contributing to a successful and inspiring R&D environment.