Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning

03/22/2018, by Torsten Koller, et al., ETH Zurich

Learning-based methods have been successful in solving complex control tasks without significant prior knowledge about the system. However, these methods typically do not provide any safety guarantees, which prevents their use in safety-critical, real-world applications. In this paper, we present a learning-based model predictive control scheme that provides provable high-probability safety guarantees. To this end, we exploit regularity assumptions on the dynamics in terms of a Gaussian process prior to construct provably accurate confidence intervals on predicted trajectories. Unlike previous approaches, we do not assume that model uncertainties are independent. Based on these predictions, we guarantee that trajectories satisfy safety constraints. Moreover, we use a terminal set constraint to recursively guarantee the existence of safe control actions at every iteration. In our experiments, we show that the resulting algorithm can be used to safely and efficiently explore and learn about dynamic systems.


I Introduction

In model-based reinforcement learning (RL, [1]), we aim to learn the dynamics of an unknown system from data, and based on the model, derive a policy that optimizes the long-term behavior of the system. Crucial to the success of such methods is the ability to efficiently explore the state space in order to quickly improve our knowledge about the system. While empirically successful, current approaches often use exploratory actions during learning, which lead to unpredictable and possibly unsafe behavior of the system, e.g., in exploration approaches based on the optimism in the face of uncertainty principle [2]. Such approaches are not applicable to real-world safety-critical systems.

In this paper we introduce SafeMPC, a safe model predictive control (MPC) scheme that guarantees the existence of feasible return trajectories to a safe region of the state space at every time step with high probability. These return trajectories are identified through a novel uncertainty propagation method that, in combination with constrained MPC, allows for formal safety guarantees in learning control.

Fig. 1: Propagation of uncertainty over multiple time steps based on a well-calibrated statistical model of the unknown system. We iteratively compute ellipsoidal over-approximations (purple) of the intractable image (green) of the learned model for uncertain ellipsoidal inputs.

Related Work

One area that has considered safety guarantees is robust MPC. There, we iteratively optimize the performance along finite-length trajectories at each time step, based on a known model that incorporates uncertainties and disturbances acting on the system [3]. In a constrained robust MPC setting, we optimize these local trajectories under additional state and control constraints. Safety is typically defined in terms of recursive feasibility and robust constraint satisfaction. In [4], this definition is used to safely control urban traffic flow, while [5] guarantees safety by switching between a standard and a safety mode. However, these methods are conservative since they do not update the model.

In contrast, learning-based control approaches adapt their models online based on observations of the system. This allows the controller to improve over time, given limited prior knowledge of the system. Theoretical safety guarantees in learning-based MPC (LBMPC) are established in [6]. A safety mechanism for general learning-based controllers using robust MPC is proposed in [7]. Both approaches require a known nominal linear model. The former approach requires deviations from the system dynamics to be bounded in a pre-specified polytope, while the latter relies on sampling.

MPC based on Gaussian process (GP, [8]) models is proposed in a number of works, e.g. [9, 10]. The difficulty here is that trajectories have complex dependencies on states and unbounded stochastic uncertainties. Safety through probabilistic chance constraints is considered in [11, 12, 13] based on approximate uncertainty propagation. While often being empirically successful, these approaches do not theoretically guarantee safety of the underlying system.

Another area that has considered learning for control is model-based RL. There, we aim to learn global policies based on data-driven modeling techniques, e.g., by explicitly trading off between finding locally optimal policies (exploitation) and learning the behavior of the system globally (exploration) [1]. This results in data-efficient learning of policies in unknown systems [14]. In contrast to MPC, where we optimize finite-length trajectories, in RL we typically aim to find an infinite-horizon optimal policy. Hence, enforcing hard constraints in RL is challenging. Control-theoretic safety properties such as Lyapunov stability or robust constraint satisfaction are only considered in a few works [15]. In [16], safety is guaranteed by optimizing parametric policies under stability constraints, while [17] guarantees safety in terms of constraint satisfaction through reachability analysis.

Our Contribution

We combine ideas from robust control and GP-based RL to design an MPC scheme that recursively guarantees the existence of a safety trajectory that satisfies the constraints of the system. In contrast to previous approaches, we use a novel uncertainty propagation technique that can reliably propagate the confidence intervals of a GP model forward in time. We use results from statistical learning theory to guarantee that these trajectories contain the system with high probability jointly for all time steps. In combination with a constrained MPC approach and a terminal set constraint, we then prove the safety of the system. We apply the algorithm to safely explore the dynamics of an inverted pendulum simulation.

II Problem Statement

We consider a nonlinear, discrete-time dynamical system

x_{t+1} = f(x_t, u_t) = h(x_t, u_t) + g(x_t, u_t),   (1)

where x_t ∈ ℝ^{n_x} is the state and u_t ∈ ℝ^{n_u} is the control input to the system at time step t ∈ ℕ. We assume that we have access to a twice continuously differentiable prior model h(x_t, u_t), which could be based on a first-principles physics model. The model error g(x_t, u_t) is a priori unknown and we use a statistical model to learn it by collecting observations from the system during operation. In order to provide guarantees, we need reliable estimates of the model error. In general, this is impossible for arbitrary functions g. We make the following additional regularity assumptions.

We assume that the model error g is of the form g(z) = Σ_i α_i k(z, z_i), a weighted sum of distances between inputs z and representer points z_i as defined through a symmetric, positive definite kernel k. This class of functions is well-behaved in the sense that it forms a reproducing kernel Hilbert space (RKHS, [18]) H_k equipped with an inner product ⟨·,·⟩_k. The induced norm ||g||_k is a measure of the complexity of a function g ∈ H_k. Consequently, the following assumption can be interpreted as a requirement on the smoothness of the model error g w.r.t. the kernel k.

Assumption 1

The unknown function g has bounded norm in the RKHS H_k, induced by the continuously differentiable kernel k, i.e. ||g||_k ≤ B_g.

In the case of a multi-dimensional output (n_x > 1), we follow [19] and redefine g as a single-output function g̃ such that g̃(·, i) = g_i(·), the i-th output dimension of g, and assume that ||g̃||_k ≤ B_g.

We further assume that the system is subject to polytopic state and control constraints

X = {x ∈ ℝ^{n_x} | H_x x ≤ h_x},   (2)
U = {u ∈ ℝ^{n_u} | H_u u ≤ h_u},   (3)

which are bounded. For example, in an autonomous driving scenario, the state region could correspond to a highway lane and the control constraints could represent the physical limits on acceleration and steering angle of the car.

Lastly, we assume access to a backup controller π_safe that guarantees that we remain inside a given safe subset of the state space once we enter it. In the autonomous driving example, this could be a simple linear controller that stabilizes the car in a small region in the center of the lane at slow speeds.

Assumption 2

We are given a controller π_safe and a polytopic safe region

X_safe := {x ∈ ℝ^{n_x} | H_s x ≤ h_s} ⊆ X,   (4)

which is (robust) control positive invariant (RCPI) under π_safe. Moreover, the controller satisfies the control constraints inside X_safe, i.e. π_safe(x) ∈ U for all x ∈ X_safe.

This assumption allows us to gather initial data from the system inside the safe region even in the presence of significant model errors, since the system remains safe under the controller π_safe. Moreover, we can still guarantee constraint satisfaction asymptotically outside of X_safe, if we can show that a finite sequence of control inputs eventually steers the system back to the safe set X_safe. This idea and a similar definition of a safe set were introduced concurrently in [7]. A set and corresponding controller which fulfill Assumption 2 for general dynamical systems are difficult to find. However, there has been recent progress in finding stability regions for systems of the form (1), which are RCPI by design, that could, under additional considerations (e.g. through polytopic inner-approximations [20]), satisfy the assumptions.

Given a controller π, ideally we want to enforce the state and control constraints at every time step,

∀ t ∈ ℕ:  f_π(x_t) ∈ X,  π(x_t) ∈ U,   (5)

where x_{t+1} = f_π(x_t) := f(x_t, π(x_t)) denotes the closed-loop system under π. Apart from the backup controller π_safe restricted to X_safe, which trivially and conservatively fulfills this, it is in general impossible to design a controller that enforces (5) without additional assumptions. Instead, we slightly relax this requirement to safety with high probability throughout its operation time.

Definition 1

Let π be a controller for the system (1) with the corresponding closed-loop dynamics f_π. Let x_0 ∈ X_safe and δ ∈ (0, 1]. A system is δ-safe under the controller π iff:

Pr[ ∀ t ∈ ℕ:  f_π(x_t) ∈ X,  π(x_t) ∈ U ] ≥ 1 − δ.   (6)

Based on Definition 1, the goal is to design a control scheme that guarantees δ-safety of the system (1). At the same time, we want to improve our model by learning from observations collected outside of the initial safe set X_safe during operation, which increases the performance of the controller over time.

III Background

In this section, we introduce the necessary background on GPs and set-theoretic properties of ellipsoids that we use to model our system and perform multi-step ahead predictions.

III-A Gaussian Processes (GPs)

We want to learn the unknown model error g from data using a GP model. A GP(m, k) is a distribution over functions, which is fully specified through a mean function m: ℝ^d → ℝ and a covariance function k: ℝ^d × ℝ^d → ℝ, where d = n_x + n_u. Given a set of n noisy observations of the system, we choose a zero-mean prior on g, i.e. m ≡ 0, and regard the differences ỹ_n = [y_1 − h(z_1), .., y_n − h(z_n)]^T between prior model h and observed system responses y_i at input locations Z_n = [z_1, .., z_n]^T, z_i = (x_i, u_i). The posterior distribution at a query point z is then given as a Gaussian N(μ_n(z), σ_n²(z)) with mean and variance

μ_n(z) = k_n(z)^T [K_n + σ² I_n]^{-1} ỹ_n,   (7)
σ_n²(z) = k(z, z) − k_n(z)^T [K_n + σ² I_n]^{-1} k_n(z),   (8)

where [K_n]_{ij} = k(z_i, z_j), [k_n(z)]_i = k(z, z_i), σ² is the observation noise variance, and I_n is the n-dimensional identity matrix. In the case of multiple outputs (n_x > 1), we model each output dimension with an independent GP, GP(0, k_j), j = 1, .., n_x. We then redefine (7) and (8) as μ_n(·) = [μ_{n,1}(·), .., μ_{n,n_x}(·)]^T and σ_n(·) = [σ_{n,1}(·), .., σ_{n,n_x}(·)]^T, corresponding to the predictive mean and variance functions of the individual models.
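
As an illustration of the posterior equations (7) and (8), the following minimal sketch computes the single-output predictive mean and variance in plain NumPy. The squared-exponential kernel, the function names, and the hyperparameter values are placeholder assumptions for illustration; the models used in this paper combine linear and Matérn kernels instead.

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel k(a, b) between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(Z, y_tilde, z_star, noise_var=1e-4):
    # Z       : (n, n_x + n_u) input locations z_i = (x_i, u_i)
    # y_tilde : (n,) differences between observed responses and the prior model h
    # z_star  : (n_x + n_u,) query input
    K = rbf_kernel(Z, Z) + noise_var * np.eye(len(Z))          # K_n + sigma^2 I_n
    k_star = rbf_kernel(Z, z_star[None, :])                    # k_n(z*)
    mu = k_star.T @ np.linalg.solve(K, y_tilde)                # eq. (7)
    var = rbf_kernel(z_star[None, :], z_star[None, :]) \
          - k_star.T @ np.linalg.solve(K, k_star)              # eq. (8)
    return mu.item(), var.item()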

Based on Assumption 1, we can use GPs to model the unknown part of the system (1), which provides us with reliable confidence intervals on the model error g.

Lemma 1

[16, Lemma 2]: Assume ||g||_k ≤ B_g and that measurements are corrupted by σ-sub-Gaussian noise. Let β_n be chosen in terms of B_g, σ, δ, and the information capacity γ_n associated with the kernel k, as in [16, Lemma 2]. Then with probability at least 1 − δ we have, for all 1 ≤ j ≤ n_x and all z ∈ X × U, that |μ_{n,j}(z) − g_j(z)| ≤ β_n σ_{n,j}(z).

In combination with the prior model h, this allows us to construct reliable confidence intervals around the true dynamics of the system (1). The scaling β_n depends on the number n of data points that we gather from the system through the information capacity γ_n, i.e. the maximum mutual information between a finite set of n samples and the function g. Exact evaluation of γ_n is NP-hard in general, but it can be greedily approximated and has sublinear dependence on n for many commonly used kernels [21].

The regularity Assumption 1 on our model error and the smoothness assumption on the covariance function k additionally imply that the function g is Lipschitz continuous.

III-B Ellipsoids

We use ellipsoids to give an outer bound on the uncertainty of our system when making multi-step ahead predictions. Due to appealing geometric properties, ellipsoids are widely used in the robust control community to compute reachable sets [22, 23]. These sets intuitively provide an outer approximation on the next state of a system considering all possible realizations of uncertainties when applying a controller to the system at a given set-valued input. We briefly review some of these properties and refer to [24] for an exhaustive introduction to ellipsoids and to the derivations for the following properties.

We use the basic definition of an ellipsoid,

E(p, Q) := {x ∈ ℝ^n | (x − p)^T Q^{-1} (x − p) ≤ 1},   (9)

with center p ∈ ℝ^n and a symmetric positive definite (s.p.d.) shape matrix Q ∈ ℝ^{n×n}. Ellipsoids are invariant under affine subspace transformations such that, for A ∈ ℝ^{r×n} with full row rank and b ∈ ℝ^r, we have that

A · E(p, Q) + b = E(Ap + b, AQA^T).   (10)

The Minkowski sum E(p_1, Q_1) ⊕ E(p_2, Q_2), i.e. the pointwise sum between two arbitrary ellipsoids, is in general not an ellipsoid anymore, but we have that

E(p_1, Q_1) ⊕ E(p_2, Q_2) ⊂ E(p_1 + p_2, (1 + c^{-1}) Q_1 + (1 + c) Q_2)   (11)

for all c > 0. Moreover, the minimizer of the trace of the resulting shape matrix is analytically given as c = sqrt(Tr(Q_1)/Tr(Q_2)). A particular problem that we encounter is finding the maximum distance to the center of an ellipsoid under a special transformation, i.e.

r(Q, S) = max_{x ∈ E(0, Q)} ||S x||_2,   (12)

where S ∈ ℝ^{m×n} with full column rank. This is a generalized eigenvalue problem of the pair (S^T S, Q^{-1}), and the optimizer is given as the square root of the largest generalized eigenvalue.
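
The properties (10)-(12) are straightforward to implement. The following sketch in plain NumPy/SciPy mirrors the three operations above; the function names and the small regularization guarding degenerate ellipsoids are our own assumptions, and these helpers are reused in the propagation sketches further below.

import numpy as np
from scipy.linalg import eigh

def affine_transform(p, Q, A, b):
    # Image of E(p, Q) under x -> A x + b, eq. (10): E(A p + b, A Q A^T).
    return A @ p + b, A @ Q @ A.T

def minkowski_outer(p1, Q1, p2, Q2):
    # Ellipsoidal over-approximation of E(p1, Q1) + E(p2, Q2), eq. (11), with the
    # trace-minimizing choice c = sqrt(Tr(Q1)/Tr(Q2)); the max() guards c > 0 for
    # (nearly) degenerate first ellipsoids.
    c = np.sqrt(max(np.trace(Q1), 1e-12) / np.trace(Q2))
    return p1 + p2, (1.0 + 1.0 / c) * Q1 + (1.0 + c) * Q2

def max_transformed_norm(Q, S):
    # r(Q, S) = max_{x in E(0, Q)} ||S x||_2, eq. (12): square root of the largest
    # generalized eigenvalue of the pair (S^T S, Q^{-1}).
    lam = eigh(S.T @ S, np.linalg.inv(Q), eigvals_only=True)
    return np.sqrt(lam.max())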

IV Safe Model Predictive Control

In this section, we use the assumptions in Sec. II to design a control scheme that fulfills our safety requirements in Definition 1. We construct reliable, multi-step ahead predictions based on our GP model and use MPC to actively optimize over these predicted trajectories under safety constraints. Using Assumption 2, we employ a terminal set constraint to theoretically prove the safety of our method.

IV-A Multi-step Ahead Predictions

From Lemma 1 and our prior model h, we directly obtain high-probability confidence intervals on f(x, u), uniformly for all (x, u) ∈ X × U. We extend this to over-approximate the system after a sequence of inputs (u_0, u_1, ..). The result is a sequence of set-valued confidence regions that contain the true dynamics of the system with high probability.

One-step ahead predictions

We compute an ellipsoidal confidence region that contains the next state of the system with high probability when applying a control input, given that the current state is contained in an ellipsoid. In order to approximate the system, we linearize our prior model h and use the affine transformation property (10) to compute the ellipsoidal next state of the linearized model. Next, we approximate the unknown model error g using the confidence intervals of our GP model. We finally apply Lipschitz arguments to outer-bound the approximation errors. We sum up these individual approximations, which results in an ellipsoidal approximation of the next state of the system. This is illustrated in Fig. 2. We formally derive the necessary equations in the following paragraphs. The reader may choose to skip the technical details of these approximations, which result in Lemma 2.

Fig. 2: Decomposition of the over-approximated image of the system (1) under an ellipsoidal input R. The exact, unknown image of f (right, green area) is approximated by the linearized model (center, top) and a remainder term that accounts for the confidence intervals and the linearization errors of the approximation (center, bottom). The resulting ellipsoid is given by the Minkowski sum of the two individual approximations.

We first regard the system f in (1) for a single input vector z = (x, u). We linearly approximate f around z̄ = (x̄, ū) via

f(z) ≈ h(z̄) + J_h(z̄)(z − z̄) + g(z̄) =: f̃(z),   (13)

where J_h(z̄) = [A, B] is the Jacobian of h at z̄.

Next, we use the Lagrangian remainder theorem [25] on the linearization of h and apply a continuity argument on our locally constant approximation of g. This results in an upper bound on the approximation error,

|f_i(z) − f̃_i(z)| ≤ (L_{∇h,i} / 2) ||z − z̄||_2² + L_g ||z − z̄||_2,   (14)

where f_i is the i-th component of f, 1 ≤ i ≤ n_x, L_{∇h,i} is the Lipschitz constant of the gradient ∇h_i, and L_g is the Lipschitz constant of g, which exists by the Lipschitz continuity noted in Sec. III-A.

The function f̃ depends on the unknown model error g. We approximate g(z̄) with the statistical GP model, g(z̄) ≈ μ_n(z̄). From Lemma 1 we have

|g_i(z̄) − μ_{n,i}(z̄)| ≤ β_n σ_{n,i}(z̄),  1 ≤ i ≤ n_x,   (15)

with high probability. We combine (14) and (15) to obtain

|f_i(z) − h_i(z̄) − [J_h(z̄)]_{i,·}(z − z̄) − μ_{n,i}(z̄)| ≤ β_n σ_{n,i}(z̄) + (L_{∇h,i} / 2) ||z − z̄||_2² + L_g ||z − z̄||_2,   (16)

where 1 ≤ i ≤ n_x. We can interpret (16) as the edges of the confidence hyper-rectangle

m(z) = f̃_μ(z) ⊕ [−d̄(z̄, z), d̄(z̄, z)],   (17)

where f̃_μ(z) := h(z̄) + J_h(z̄)(z − z̄) + μ_n(z̄) and we use the shorthand notation d̄(z̄, z) for the vector with entries d̄_i(z̄, z) = β_n σ_{n,i}(z̄) + (L_{∇h,i} / 2) ||z − z̄||_2² + L_g ||z − z̄||_2.

We are now ready to compute a confidence region based on an ellipsoidal state R = E(p, Q) ⊂ ℝ^{n_x} and a fixed input u ∈ U, by over-approximating the output of the system f(R, u) = {f(x, u) | x ∈ R} for ellipsoidal inputs R. Here, we choose p as the linearization center of the state and choose ū = u, i.e. z̄ = (p, u). Since the function f̃_μ is affine, we can make use of (10) to compute

f̃_μ(R, u) = E(h(z̄) + μ_n(z̄), A Q A^T),   (18)

resulting again in an ellipsoid, where A is the Jacobian of h with respect to x at z̄. This is visualized in Fig. 2 by the upper ellipsoid in the center. To upper-bound the confidence hyper-rectangle on the right-hand side of (17), we upper-bound the term ||z − z̄||_2 by

l(R, u) := max_{x ∈ R} ||(x, u) − z̄||_2,   (19)

which leads to

d̄_i(R, u) := β_n σ_{n,i}(z̄) + (L_{∇h,i} / 2) l(R, u)² + L_g l(R, u),  1 ≤ i ≤ n_x.   (20)

Due to our choice of z̄ = (p, u), we have that ||(x, u) − z̄||_2 = ||x − p||_2 and we can use (12) to get l(R, u) = r(Q, I_{n_x}), which corresponds to the square root of the largest eigenvalue of Q. Using (19), we can now over-approximate the right side of (17) for inputs R by an ellipsoid

0 ⊕ [−d̄(R, u), d̄(R, u)] ⊂ E(0, Q_d(R, u)),   (21)

where we obtain Q_d(R, u) by over-approximating the hyper-rectangle d̄(R, u) with the ellipsoid E(0, Q_d) through Q_d(R, u) = n_x · diag(d̄_1²(R, u), .., d̄_{n_x}²(R, u)). This is illustrated in Fig. 2 by the lower ellipsoid in the center. Combining the previous results, we can compute the final over-approximation using (11),

R_+ = m̃(R, u) := f̃_μ(R, u) ⊕ E(0, Q_d(R, u)).   (22)

Since we carefully incorporated all approximation errors and extended the confidence intervals around our model predictions to set-valued inputs, we get the following generalization of Lemma 1.

Lemma 2

Let δ ∈ (0, 1] and choose β_n as in Lemma 1. Then, with probability greater than 1 − δ, we have that:

x ∈ R  ⟹  f(x, u) ∈ m̃(R, u),   (23)

uniformly for all ellipsoids R = E(p, Q) ⊂ X and inputs u ∈ U.

Proof

Define z̄ = (p, u). From Lemma 1 we have that, with high probability, |g_i(z̄) − μ_{n,i}(z̄)| ≤ β_n σ_{n,i}(z̄) for all 1 ≤ i ≤ n_x. Due to the over-approximations (13)-(22), we have f(x, u) ∈ m̃(R, u) for all x ∈ R.

Lemma 2 allows us to compute a confidence ellipsoid around the next state of the system, given that the current state of the system is given through an ellipsoidal belief.
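
To make the construction concrete, the sketch below strings together the affine part (18), the remainder bound (20)-(21), and the Minkowski sum (22), reusing the NumPy helpers from Sec. III-B. The interfaces `prior` (with methods f and jac_x) and `gp` (with methods mean and std), as well as the Lipschitz constants passed in, are assumed placeholders and not part of the paper.

def one_step_prediction(p, Q, u, prior, gp, beta, lip_g, lip_dh):
    # Ellipsoidal one-step over-approximation m~(E(p, Q), u), sketching eq. (22).
    n_x = len(p)
    z_bar = (p, u)
    # Affine part (18): image of E(p, Q) under the linearized prior plus GP mean.
    A = prior.jac_x(*z_bar)                                    # d h / d x at z_bar
    center = prior.f(*z_bar) + gp.mean(*z_bar)
    p_lin, Q_lin = affine_transform(p, Q, A, center - A @ p)
    # Worst-case distance l(R, u) from the linearization point, eq. (19),
    # i.e. eq. (12) with S = I, the square root of the largest eigenvalue of Q.
    l = max_transformed_norm(Q, np.eye(n_x))
    # Remainder widths d_bar, eq. (20), and their ellipsoidal over-approximation (21).
    d_bar = beta * gp.std(*z_bar) + 0.5 * lip_dh * l ** 2 + lip_g * l
    Q_d = n_x * np.diag(d_bar ** 2)
    # Minkowski-sum over-approximation, eq. (22).
    return minkowski_outer(p_lin, Q_lin, np.zeros(n_x), Q_d)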

Multi-step ahead predictions

We now use the previous results to compute a sequence of ellipsoids that contain a trajectory of the system with high probability, by iteratively applying the one-step ahead predictions (22).

Given an initial ellipsoid R_0 ⊂ ℝ^{n_x} and a sequence of control inputs u_0, u_1, .. ∈ U, we iteratively compute confidence ellipsoids as

R_{t+1} = m̃(R_t, u_t).   (24)

We can directly apply Lemma 2 to get the following result.

Corollary 1

Let δ ∈ (0, 1] and choose β_n as in Lemma 1. Choose x_0 ∈ R_0 ⊂ X. Then the following holds jointly for all t ≥ 0 with probability at least 1 − δ: x_t ∈ R_t, where (u_0, u_1, ..) ⊂ U, R_t is computed as in (24), and x_t is the state of the system (1) at time step t.

Proof

Since Lemma 2 holds uniformly for all ellipsoids R ⊂ X and inputs u ∈ U, this is a special case that holds uniformly for all control inputs (u_0, u_1, ..) ⊂ U and for all ellipsoids R_t obtained through (24).

Corollary 1 guarantees that, with high probability, the system is always contained in the propagated ellipsoids (24). Thus, if we provide safety guarantees for these sequences of ellipsoids, we obtain high-probability safety guarantees for the system (1).
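
A corresponding multi-step sketch simply iterates the one-step over-approximation along a given input sequence, as in (24). The tiny initial shape matrix stands in for the point-mass belief at the measured state and is an implementation convenience of this sketch, not part of the paper.

def multi_step_prediction(x0, inputs, prior, gp, beta, lip_g, lip_dh):
    # Propagate confidence ellipsoids R_1, R_2, .. along the inputs u_0, u_1, .., eq. (24).
    p = np.asarray(x0, dtype=float)
    Q = 1e-9 * np.eye(len(p))          # (nearly) degenerate ellipsoid around the known x_0
    ellipsoids = []
    for u in inputs:
        p, Q = one_step_prediction(p, Q, u, prior, gp, beta, lip_g, lip_dh)
        ellipsoids.append((p, Q))
    return ellipsoids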

Predictions under state-feedback control laws

When applying multi-step ahead predictions under a sequence of feed-forward inputs u_t ∈ U, the individual sets of the corresponding reachability sequence can quickly grow unreasonably large. This is because these open-loop input sequences do not account for future control inputs that could correct deviations from the model predictions. Hence, we extend (22) to affine state-feedback control laws of the form

u_{K,t}(x) := K_t (x − p_t) + k_t,   (25)

where K_t ∈ ℝ^{n_u × n_x} is a feedback matrix and k_t ∈ ℝ^{n_u} is the open-loop input. The parameter p_t is determined through the center of the current ellipsoid R_t = E(p_t, Q_t). Given an appropriate choice of K_t, the control law actively contracts the ellipsoids towards their center. Similar to the derivations (13)-(22), we can compute the function m̃ for affine feedback controllers (25) and ellipsoids R_t. The resulting ellipsoid is

R_{t+1} = m̃(R_t, u_{K,t}) = E(h(z̄_t) + μ_n(z̄_t), H_t Q_t H_t^T) ⊕ E(0, Q_d(R_t, u_{K,t})),   (26)

where z̄_t = (p_t, k_t) and H_t = A_t + B_t K_t, with A_t and B_t the Jacobians of h with respect to x and u at z̄_t. The set E(0, Q_d(R_t, u_{K,t})) is obtained similarly to (19)-(21) as the ellipsoidal over-approximation of the hyper-rectangle with edges

d̄_i(R_t, u_{K,t}) := β_n σ_{n,i}(z̄_t) + (L_{∇h,i} / 2) l(R_t, u_{K,t})² + L_g l(R_t, u_{K,t}),   (27)

with l(R_t, u_{K,t}) = max_{x ∈ R_t} ||(x, u_{K,t}(x)) − z̄_t||_2. The theoretical results of Lemma 2 and Corollary 1 directly apply to the case of the uncertainty propagation technique (26).

IV-B Safety constraints

The derived multi-step ahead prediction technique provides a sequence of ellipsoidal confidence regions around trajectories of the true system through Corollary 1. We can guarantee that the system is safe by verifying that the computed confidence ellipsoids are contained inside the polytopic constraints (2) and (3). That is, given a sequence of feedback controllers u_{K,t}, t = 0, .., T − 1, we need to verify

R_t ⊂ X  and  u_{K,t}(R_t) ⊂ U,  t = 0, .., T − 1,   (28)

where the sequence of ellipsoids R_t is given through (24).

Since our constraints are polytopes, we have that X = ∩_{i=1}^{m_x} X_i with X_i = {x ∈ ℝ^{n_x} | [H_x]_{i,·} x ≤ [h_x]_i}, where [H_x]_{i,·} is the i-th row of H_x. We can now formulate the state constraints through the condition R = E(p, Q) ⊂ X as m_x individual constraints R ⊂ X_i, for which an analytical formulation exists [26],

[H_x]_{i,·} p + sqrt([H_x]_{i,·} Q [H_x]_{i,·}^T) ≤ [h_x]_i,  ∀ i ∈ {1, .., m_x}.   (29)

Moreover, we can use the fact that u_{K,t} is affine in x to obtain u_{K,t}(R_t) = E(k_t, K_t Q_t K_t^T), using (10). The corresponding control constraint u_{K,t}(R_t) ⊂ U is then equivalently given by

[H_u]_{i,·} k_t + sqrt([H_u]_{i,·} K_t Q_t K_t^T [H_u]_{i,·}^T) ≤ [h_u]_i,  ∀ i ∈ {1, .., m_u}.   (30)
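
In code, the containment conditions (29) and (30) reduce to a support-function check per polytope face. A minimal sketch, assuming the ellipsoid representation used above:

def ellipsoid_in_polytope(p, Q, H, h):
    # E(p, Q) is contained in {x : H x <= h} iff, for every row i,
    # H_i p + sqrt(H_i Q H_i^T) <= h_i, as in eq. (29).
    support = H @ p + np.sqrt(np.einsum('ij,jk,ik->i', H, Q, H))
    return bool(np.all(support <= h))

def control_constraint_satisfied(k, K, Q, H_u, h_u):
    # Under the affine law (25), the set of applied inputs is E(k, K Q K^T),
    # so the control constraint (30) is the same check on that ellipsoid.
    return ellipsoid_in_polytope(k, K @ Q @ K.T, H_u, h_u)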

IV-C The SafeMPC algorithm

Based on the previous results, we formulate an MPC scheme that optimizes the long-term performance of our system, while satisfying the safety condition in Definition 1:

min_{u_{K,0}, .., u_{K,T−1}}  J_T(R_0, .., R_T)   (31a)
subject to  R_{t+1} = m̃(R_t, u_{K,t}),  t = 0, .., T − 1,   (31b)
R_t ⊂ X,  t = 1, .., T − 1,   (31c)
u_{K,t}(R_t) ⊂ U,  t = 0, .., T − 1,   (31d)
R_T ⊂ X_safe,   (31e)

where R_0 := {x_t} is the current state of the system and the intermediate state and control constraints are defined in (29), (30). The terminal set constraint R_T ⊂ X_safe has the same form as (29) and can be formulated accordingly. The objective J_T can be chosen to suit the given control task.

Due to the terminal constraint R_T ⊂ X_safe, a solution to (31) provides a sequence of feedback controllers u_{K,0}, .., u_{K,T−1} that steers the system back to the safe set X_safe. We cannot directly show that a solution to the MPC problem (31) exists at every time step (this property is known as recursive feasibility) without imposing additional assumptions, e.g. on the safety controller π_safe. However, employing a control scheme similar to standard robust MPC, we guarantee that such a sequence of feedback controllers exists at every time step as follows: Given a feasible solution Π_t = (u_{K,0}^t, .., u_{K,T−1}^t) to (31) at time t, we apply the first feedback control u_{K,0}^t. In case we do not find a feasible solution to (31) at the next time step, we shift the previous solution in a receding-horizon fashion and append π_safe to the sequence to obtain Π_{t+1} = (u_{K,1}^t, .., u_{K,T−1}^t, π_safe). We repeat this process until a new feasible solution exists that replaces the previous input sequence. This procedure is summarized in Algorithm 1. We now state the main result of the paper that guarantees the safety of our system under the proposed algorithm.

Theorem 2

Let π be the controller defined through Algorithm 1 and x_0 ∈ X_safe. Then the system (1) is δ-safe under the controller π.

Proof

From Corollary 1, the ellipsoidal outer approximations (and by design of the MPC problem, also the constraints (2) and (3)) hold uniformly with high probability for all closed-loop systems under feasible solutions to (31), over the corresponding time horizon T. Hence we can show uniform high-probability safety by induction. Base case: If (31) is infeasible at t = 0, we are δ-safe using the backup controller π_safe of Assumption 2, since x_0 ∈ X_safe. Otherwise the controller returned from (31) is δ-safe as a consequence of Corollary 1 and the terminal set constraint that leads to R_T ⊂ X_safe. Induction step: let the previous controller Π_{t−1} be δ-safe. At time step t, if (31) is infeasible, then the shifted sequence obtained from Π_{t−1} leads to a state in X_safe, from which the backup controller is δ-safe by Assumption 2. If (31) is feasible, then the return path is δ-safe by Corollary 1.

1: Input: Safe policy π_safe, dynamics model h, statistical model GP(0, k).
2:      Π_0 ← (π_safe, .., π_safe) with horizon T
3: for t = 0, 1, .. do
4:      J_t ← objective from high-level planner
5:      feasible, Π ← solve MPC problem (31)
6:      if feasible then: Π_t ← Π
7:      else: Π_t ← (Π_{t−1,1}, .., Π_{t−1,T−1}, π_safe)
8:      apply u_t = Π_{t,0}(x_t) to the system (1)
Algorithm 1 Safe Model Predictive Control (SafeMPC)
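
In code, the fallback logic of Algorithm 1 amounts to a few lines per time step. The sketch below assumes a hypothetical `solve_mpc(x)` that returns the sequence of feedback laws solving (31), or None if the problem is infeasible, and a backup controller `pi_safe` as in Assumption 2.

def safempc_step(x_t, pending, solve_mpc, pi_safe):
    # One iteration of Algorithm 1: returns the control to apply and the updated
    # fallback sequence of feedback laws (each law maps a state to an input).
    solution = solve_mpc(x_t)                       # try to solve the MPC problem (31)
    if solution is not None:
        pending = list(solution)                    # fresh feasible return strategy
    else:
        pending = pending[1:] + [pi_safe]           # shift previous solution, append pi_safe
    u_t = pending[0](x_t)                           # apply the first feedback law
    return u_t, pending

Initially, `pending` can be set to a sequence of copies of `pi_safe`, matching line 2 of Algorithm 1.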

IV-D Optimizing long-term behavior

While the proposed MPC problem (31) yields a safe return strategy, we are often interested in a controller that optimizes performance over a possibly much longer horizon. In the autonomous driving example, a safety trajectory that stabilizes the car towards the center of the lane can be much shorter than the horizon required to plan a steering maneuver before entering a turn. We hence propose to simultaneously plan a performance trajectory s_0, .., s_H under a sequence of inputs u_0^{perf}, .., u_{H−1}^{perf} using a performance model, along with the return strategy that we obtain when solving (31). We do not make any assumptions on the performance model, which could be given by one of the approximate uncertainty propagation methods proposed in the literature (see, e.g., [11] for an overview). In order to maintain the safety of our system, we enforce that the first r control inputs are the same for both trajectories. This extended MPC problem is

min  J_H(s_0, .., s_H)   (32)
subject to  (31b) − (31e),  the performance-model dynamics of s_0, .., s_H,  and  k_t = u_t^{perf}, t = 0, .., r − 1,

where we replace (31) with this problem in Algorithm 1. The safety guarantees of Theorem 2 directly translate to this setting, since we can always fall back to the return strategy.

IV-E Discussion

Algorithm 1 theoretically guarantees that the system remains safe, while actively optimizing for performance via the MPC problem (32). This problem can be solved by commonly used nonlinear programming (NLP) solvers, such as the Interior Point OPTimizer (Ipopt, [27]). Due to the solution of the eigenvalue problem (12) that is required to compute (22), our uncertainty propagation scheme is not analytic. However, we can still obtain exact function values and derivative information by means of algorithmic differentiation, which is at the core of many state-of-the-art optimization software libraries [28].

One way to further reduce the conservatism of the multi-step ahead predictions is to linearize the GP mean prediction μ_n, which we omitted for clarity.

V Experiments

In this section, we evaluate the proposed SafeMPC algorithm to safely explore the dynamics of an inverted pendulum system.

Fig. 3: Visualization of the samples acquired in the static exploration setting in Sec. V-A for different horizon lengths T. The algorithm plans informative paths to the safe set X_safe (red polytope in the center). The baseline sample set for the shortest horizon (left) is dense around the origin of the system. For an intermediate horizon (center) we get the optimal trade-off between cautiousness due to a long horizon and limited length of the return trajectory due to a short horizon. The exploration for the longest horizon (right) is too cautious, since the propagated uncertainty at the final state is too large.

The continuous-time dynamics of the pendulum are given by m l² θ̈ = g m l sin(θ) − η θ̇ + u, where m and l are the mass and length of the pendulum, respectively, η is a friction parameter, and g is the gravitational constant. The state of the system x = (θ, θ̇) consists of the angle θ and angular velocity θ̇ of the pendulum. The system is controlled by a torque u that is applied to the pendulum. The origin of the system corresponds to the pendulum standing upright.
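
For reference, a minimal simulation of this setup could look as follows; the Euler discretization, the parameter values, and the linearization in the prior are placeholder assumptions for illustration, not the exact values used in the experiments.

import numpy as np

def pendulum_step(x, u, dt=0.05, m=0.25, l=0.5, eta=0.1, g=9.81):
    # True dynamics: m l^2 * theta_dd = g m l sin(theta) - eta * theta_d + u (Euler step).
    theta, theta_d = x
    theta_dd = (g * m * l * np.sin(theta) - eta * theta_d + u) / (m * l ** 2)
    return np.array([theta + dt * theta_d, theta_d + dt * theta_dd])

def prior_step(x, u, dt=0.05, m=0.2, l=0.5, g=9.81):
    # Prior model h: linearization around the upright equilibrium with a lower
    # mass and neglected friction, discretized with the same step size.
    A = np.array([[1.0, dt], [dt * g / l, 1.0]])
    B = np.array([0.0, dt / (m * l ** 2)])
    return A @ x + B * u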

The system is underactuated with control constraints on the input torque. Due to these limits, the pendulum becomes unstable and falls down beyond a certain angle. We do not impose state constraints, i.e. X = ℝ². However, the terminal set constraint (31e) of the MPC problem (31) acts as a stability constraint and prevents the pendulum from falling. Apart from being smooth, we do not make any assumptions on our prior model h, and we choose it to be a linearized and discretized approximation to the true system with a lower mass and neglected friction, as in [16]. The safety controller π_safe is a discrete-time, infinite-horizon linear quadratic regulator (LQR, [29]) of the approximated system h with given cost matrices. The corresponding safety region X_safe is given by a conservative polytopic inner-approximation of the true region of attraction of π_safe. We use the same mixture of linear and Matérn kernel functions for both output dimensions, albeit with different hyperparameters. We initially train our model with a dataset sampled inside the safe set using the backup controller π_safe. That is, we gather initial samples at inputs z_i = (x_i, π_safe(x_i)), x_i ∈ X_safe, together with the corresponding observed next states. The theoretical choice of the scaling parameter β_n for the confidence intervals in Lemma 1 can be conservative and we choose a fixed value instead, following [16].

We aim to iteratively collect the most informative samples of the system, while preserving its safety. To evaluate the exploration performance, we use the mutual information between the collected samples and the GP prior on the unknown model error g, which can be computed in closed form [21].

V-A Static Exploration

For a first experiment, we assume that the system is static, so that we can reset the system to an arbitrary state in every iteration. In the static case and without terminal set constraints, a provably close-to-optimal exploration strategy is to select, at each iteration n, the state-action pair z_{n+1} with the largest predictive standard deviation [21],

z_{n+1} = argmax_{z ∈ X × U}  Σ_{j=1}^{n_x} σ_{n,j}(z),   (33)

where σ_{n,j}² is the predictive variance (8) of the j-th GP at the n-th iteration. Inspired by this, at each iteration we collect samples by solving the MPC problem (31) with a cost function that maximizes the predictive standard deviations at the first state-action pair of the planned trajectory, where we additionally optimize over the initial state x_0 ∈ X. Hence, we visit high-uncertainty states, but only allow for state-action pairs that are part of a feasible return trajectory to the safe set X_safe.

Since optimizing the initial state is highly non-convex, we solve the problem iteratively with random initializations to obtain a good approximation of the global minimizer. After every iteration, we update the sample set, collect an observation, and update the GP models. We apply this procedure for varying horizon lengths T.
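
Conceptually, each static-exploration iteration solves (31) from several random initial guesses and keeps the feasible solution whose first state-action pair is most uncertain. A hypothetical sketch, where the solver interface, the initial-guess sampler, and the number of restarts are all assumptions:

def most_informative_sample(gp, solve_exploration_mpc, sample_guess, n_restarts=20):
    # Pick the feasible state-action pair with the largest summed predictive
    # standard deviation, in the spirit of (33) restricted to safely reachable pairs.
    best, best_val = None, -np.inf
    for _ in range(n_restarts):                     # the problem is non-convex: random restarts
        sol = solve_exploration_mpc(sample_guess()) # returns (x0, u0) of a feasible plan, or None
        if sol is None:
            continue
        x0, u0 = sol
        val = np.sum(gp.std(x0, u0))
        if val > best_val:
            best, best_val = sol, val
    return best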

The resulting sample sets are visualized for varying horizon lengths T in Fig. 3, while Fig. 4 shows how the mutual information of the sample sets evolves for the different values of T. For the shortest time horizon, the algorithm can only explore slowly, since it can only move one step outside of the safe set. This is also reflected in the mutual information gained, which levels off quickly. For intermediate horizon lengths, the algorithm is able to explore a larger part of the state space, which means that more information is gained. For larger horizons, the predictive uncertainty of the final state is too large to explore effectively, which slows down exploration initially, when we do not have much information about our system. The results suggest that our approach could further benefit from adaptively choosing the horizon during operation, e.g. by employing a variable-horizon MPC approach [30], or by increasing the horizon when the mutual information saturates for the current horizon.

Fig. 4: Mutual information for different horizon lengths T. Exploration settings with shorter horizons gather more informative samples at the beginning, but less informative samples in the long run. Longer horizon lengths result in less informative samples at the beginning, due to uncertainties being propagated over long horizons. However, after having gathered some knowledge, they quickly outperform the shorter horizon settings. The best trade-off is found for an intermediate horizon length.

V-B Dynamic Exploration

As a second experiment, we collect informative samples during operation, without resetting the system at every iteration. Starting at an initial state x_0 ∈ X_safe, we apply SafeMPC, Algorithm 1, over multiple iterations. We consider two settings. In the first, we solve the MPC problem (31) with the exploration objective given by (33), similar to the previous experiments. In the second setting, we additionally plan a performance trajectory as proposed in Sec. IV-D. We define the states of the performance trajectory as Gaussians, and the next state is given by the predictive mean and variance evaluated at the mean of the current state and the applied action. That is, s_{t+1} ∼ N(m_{t+1}, Σ_{t+1}) with m_{t+1} = h(m_t, u_t^{perf}) + μ_n(m_t, u_t^{perf}) and Σ_{t+1} = diag(σ_n²(m_t, u_t^{perf})). This simple approximation technique is known as mean-equivalent uncertainty propagation. We define the cost function to maximize the sum of predictive confidence intervals along the performance trajectory s_1, .., s_H, while penalizing deviation from the safety trajectory. We choose r = 1 in the problem (32), i.e. the first action of the safety trajectory and performance trajectory are the same. As in the static setting, we update our GP models after every iteration.
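
The mean-equivalent propagation of the performance trajectory can be sketched as below; the `prior` and `gp` interfaces are the same assumed placeholders as in the earlier sketches, and the input uncertainty is simply ignored, which is what makes this approximation cheap but heuristic.

def mean_equivalent_rollout(x0, perf_inputs, prior, gp):
    # Performance trajectory: the next mean is prior + GP mean at the current mean,
    # the next variance is the GP predictive variance there; no propagation of the
    # state uncertainty into future predictions.
    means, variances = [np.asarray(x0, dtype=float)], [np.zeros(len(x0))]
    for u in perf_inputs:
        m = means[-1]
        means.append(prior.f(m, u) + gp.mean(m, u))
        variances.append(gp.var(m, u))
    return means, variances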

We evaluate both settings for varying safety-trajectory horizons T and a fixed performance-trajectory horizon in terms of their mutual information in Fig. 5. We observe a similar behavior as in the static exploration experiments and get the best exploration performance for an intermediate horizon, with a slight degradation of performance for longer horizons. We can see that, except for one horizon setting, the performance trajectory decomposition setting consistently outperforms the standard setting. Planning a performance trajectory (green) provides the algorithm with an additional degree of freedom, which leads to drastically improved exploration performance.

Fig. 5: Comparison of the information gathered from the system over the exploration iterations for the standard setting (blue) and the setting where we plan an additional performance trajectory (green).

VI Conclusion

We introduced SafeMPC, a learning-based MPC scheme that can safely explore partially unknown systems. The algorithm is based on a novel uncertainty propagation technique that uses a reliable statistical model of the system. As we gather more data from the system and update our statistical model, the model becomes more accurate and control performance improves, all while maintaining safety guarantees throughout the learning process.

References

  • [1] R. S. Sutton and A. G. Barto, “Reinforcement Learning: An Introduction,” IEEE Transactions on Neural Networks, vol. 9, no. 5, pp. 1054–1054, 1998.
  • [2] C. Xie, S. Patil, T. Moldovan, S. Levine, and P. Abbeel, “Model-based reinforcement learning with parametrized physical models and optimism-driven exploration,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 504–511.
  • [3] J. B. Rawlings and D. Q. Mayne, Model Predictive Control: Theory and Design.   Nob Hill Pub., 2009.
  • [4] S. Sadraddini and C. Belta, “A provably correct MPC approach to safety control of urban traffic networks,” in American Control Conference (ACC), 2016, pp. 1679–1684.
  • [5] J. M. Carson, B. Açıkmeşe, R. M. Murray, and D. G. MacMartin, “A robust model predictive control algorithm augmented with a reactive safety mode,” Automatica, vol. 49, no. 5, pp. 1251–1260, 2013.
  • [6] A. Aswani, H. Gonzalez, S. S. Sastry, and C. Tomlin, “Provably safe and robust learning-based model predictive control,” Automatica, vol. 49, no. 5, pp. 1216–1226, 2013.
  • [7] K. P. Wabersich and M. N. Zeilinger, “Linear model predictive safety certification for learning-based control,” in Proc. of the Conference on Decision and Control (CDC), 2018.
  • [8] C. E. Rasmussen and C. K. Williams, Gaussian Processes for Machine Learning.   MIT Press, Cambridge MA, 2006.
  • [9] J. Kocijan, R. Murray-Smith, C. E. Rasmussen, and A. Girard, “Gaussian process model based predictive control,” in Proc. of the American Control Conference (ACC), vol. 3, 2004, pp. 2214–2219.
  • [10] G. Cao, E. M.-K. Lai, and F. Alam, “Gaussian process model predictive control of an unmanned quadrotor,” Journal of Intelligent & Robotic Systems, vol. 88, no. 1, pp. 147–162, 2017.
  • [11] L. Hewing, A. Liniger, and M. N. Zeilinger, “Cautious NMPC with Gaussian process dynamics for autonomous miniature race cars,” in Proc. of the European Control Conference (ECC), 2018.
  • [12] A. Jain, T. X. Nghiem, M. Morari, and R. Mangharam, “Learning and control using Gaussian processes: Towards bridging machine learning and controls for physical systems,” in Proc. of the International Conference on Cyber-Physical Systems, 2018, pp. 140–149.
  • [13] C. J. Ostafew, A. P. Schoellig, and T. D. Barfoot, “Robust constrained learning-based NMPC enabling reliable mobile robot path tracking,” The International Journal of Robotics Research, vol. 35, no. 13, pp. 1547–1563, 2016.
  • [14] M. P. Deisenroth and C. E. Rasmussen, “PILCO: A model-based and data-efficient approach to policy search,” in Proc. of the International Conference on Machine Learning, 2011, pp. 465–472.
  • [15] D. Ernst, M. Glavic, F. Capitanescu, and L. Wehenkel, “Reinforcement learning versus model predictive control: A comparison on a power system problem,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 39, no. 2, pp. 517–529, 2009.
  • [16] F. Berkenkamp, M. Turchetta, A. P. Schoellig, and A. Krause, “Safe model-based reinforcement learning with stability guarantees,” in Proc. of Neural Information Processing Systems (NIPS), 2017.
  • [17] A. K. Akametalu, J. F. Fisac, J. H. Gillula, S. Kaynama, M. N. Zeilinger, and C. J. Tomlin, “Reachability-based safe learning with Gaussian processes,” in Proc. of the IEEE Conference on Decision and Control (CDC), 2014, pp. 1424–1431.
  • [18] G. Wahba, Spline Models for Observational Data.   Siam, 1990, vol. 59.
  • [19] F. Berkenkamp, A. Krause, and A. P. Schoellig, “Bayesian optimization with safety constraints: Safe and automatic parameter tuning in robotics,” arXiv:1602.04450 [cs], 2016.
  • [20] E. M. Bronstein, “Approximation of convex sets by polytopes,” Journal of Mathematical Sciences, vol. 153, no. 6, pp. 727–762, 2008.
  • [21] N. Srinivas, A. Krause, S. Kakade, and M. Seeger, “Gaussian process optimization in the bandit setting: No regret and experimental design,” in Proc. of the International Conference on Machine Learning (ICML), 2010, pp. 1015–1022.
  • [22] T. F. Filippova, “Ellipsoidal estimates of reachable sets for control systems with nonlinear terms,” Proc. of the International Federation of Automatic Control (IFAC), vol. 50, no. 1, pp. 15355–15360, 2017.
  • [23] L. Asselborn, D. Gross, and O. Stursberg, “Control of uncertain nonlinear systems using ellipsoidal reachability calculus,” Proc. of the International Federation of Automatic Control (IFAC), vol. 46, no. 23, pp. 50–55, 2013.
  • [24] A. B. Kurzhanskii and I. Vályi, Ellipsoidal Calculus for Estimation and Control.   Boston, MA : Birkhäuser, 1997.
  • [25] L. Breiman and A. Cutler, “A deterministic algorithm for global optimization,” Mathematical Programming, vol. 58, no. 1-3, pp. 179–199, 1993.
  • [26] D. H. van Hessem and O. H. Bosgra, “Closed-loop stochastic dynamic process optimization under input and state constraints,” in Proc. of the American Control Conference (ACC), vol. 3, 2002, pp. 2023–2028.
  • [27] A. Wächter and L. T. Biegler, “On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming,” Mathematical Programming, vol. 106, no. 1, pp. 25–57, 2006.
  • [28] J. Andersson, “A general-purpose software framework for dynamic optimization,” PhD Thesis, Arenberg Doctoral School, KU Leuven, Leuven, Belgium, 2013.
  • [29] H. Kwakernaak and R. Sivan, Linear Optimal Control Systems.   Wiley-interscience New York, 1972, vol. 1.
  • [30] A. Richards and J. P. How, “Robust variable horizon model predictive control for vehicle maneuvering,” International Journal of Robust and Nonlinear Control, vol. 16, no. 7, pp. 333–351, 2006.