Approximate Robust Control of Uncertain Dynamical Systems

03/01/2019 · Edouard Leurent, et al. · Inria, Renault

This work studies the design of safe control policies for large-scale non-linear systems operating in uncertain environments. In such a case, the robust control framework is a principled approach to safety that aims to maximize the worst-case performance of a system. However, the resulting optimization problem is generally intractable for non-linear systems with continuous states. To overcome this issue, we introduce two tractable methods that are based either on sampling or on a conservative approximation of the robust objective. The proposed approaches are applied to the problem of autonomous driving.

1 Introduction

Reinforcement Learning is a general framework that allows the optimal control of a Markov Decision Process with state space $\mathcal{S}$, action space $\mathcal{A}$, reward function $R$ and unknown transition dynamics $P$, by searching for the policy $\pi$ with maximal expected value of the total discounted reward:

$$\max_{\pi} \; \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\right] \tag{1}$$

where $a_t \sim \pi(a_t \mid s_t)$, $s_{t+1} \sim P(s_{t+1} \mid s_t, a_t)$, $\gamma \in (0, 1)$ is the discount factor and $\mathcal{M}(\mathcal{X})$ denotes the set of probability measures over a set $\mathcal{X}$, so that $\pi \in \mathcal{M}(\mathcal{A})^{\mathcal{S}}$.

Unfortunately, its application to real-world tasks has so far been limited by its considerable need for experience. It is generally recognized (Sutton1990; Atkeson1997) that the most sample-efficient approach is the family of model-based methods, which learn a nominal model $\hat{P}$ of the environment dynamics that is then leveraged for policy search:

$$\max_{\pi} \; \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\right] \quad \text{with} \quad s_{t+1} \sim \hat{P}(s_{t+1} \mid s_t, a_t) \tag{2}$$

One drawback of such methods is that they suffer from model bias; that is, they ignore the error between the learned dynamics $\hat{P}$ and the real environment dynamics $P$. It has been shown that model bias can dramatically degrade policy performance (Schneider1997).

Model errors can instead be explicitly considered and expressed through an ambiguity set of all possible dynamics models. Such a set can be constructed from a history of observations by computing the confidence regions associated with the system identification process (Iyengar2005; Nilim2005; Dean2017; Maillard2017). In this work, we consider ambiguity sets of parametrized deterministic dynamical systems $s_{t+1} = f_\theta(s_t, a_t)$ whose unknown parameters $\theta$ lie in a compact set $\Theta \subset \mathbb{R}^d$.

In the optimal control framework, model uncertainty is handled by maximizing the expected performance with respect to the unknown dynamics. In stark contrast, in real-world applications where failures may prove very costly, the decision-maker often prefers to minimize the risk of a policy, which can be defined through several metrics characterizing the distribution of the policy outcome (Garcia2015).

The robust control framework is a popular setting in which the risk of a policy is defined as the worst possible outcome realization within the ambiguity set, so as to guarantee a lower bound on the performance of the robust policy when it is executed on the true model:

$$\sup_{\pi} \; \min_{\theta \in \Theta} \; \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t), \qquad \text{with } s_{t+1} = f_\theta(s_t, a_t), \; a_t = \pi(s_t) \tag{3}$$
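To make the objective concrete, here is a minimal sketch (in Python, with toy dynamics and function names of our own choosing) of how the inner minimum of (3) can be evaluated for a fixed policy over a finite set of candidate models:

```python
def rollout_return(dynamics, policy, s0, reward, gamma=0.95, horizon=50):
    """Discounted return of a deterministic rollout under one dynamics model."""
    s, total = s0, 0.0
    for t in range(horizon):
        a = policy(s)
        total += gamma ** t * reward(s, a)
        s = dynamics(s, a)
    return total

def robust_value(policy, candidate_dynamics, s0, reward, **kwargs):
    """Worst-case return over the ambiguity set: the inner min of objective (3)."""
    return min(rollout_return(f, policy, s0, reward, **kwargs)
               for f in candidate_dynamics)

# Toy example: uncertain scalar gain theta in {0.8, 1.0, 1.2}
candidates = [lambda s, a, th=th: th * s + a for th in (0.8, 1.0, 1.2)]
policy = lambda s: -0.5 * s              # simple state-feedback policy
reward = lambda s, a: -float(s ** 2)     # penalize distance to the origin
print(robust_value(policy, candidates, s0=1.0, reward=reward))
```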

Robust optimization has been studied in the context of finite Markov Decision Processes (MDP) with uncertain parameters by Iyengar2005, Nilim2005 and Wiesemann2013. They show that the main results of Dynamic Programming can be extended to their robust counterparts only when the dynamics ambiguity set satisfies certain rectangularity properties. In the control theory community, the robust control problem is mainly restricted to linear dynamical systems with bounded uncertainty in the time or frequency domain, where the objective is to guarantee stability (e.g. $H_\infty$-optimal control, see Basar1996) or performance (e.g. LQ optimal control theory, see Petersen2014). Existing non-linear robust control approaches such as sliding-mode control (Li2018), feedback linearization, backstepping, passivation and input-to-state stabilization (Khalil2015) are usually based on canonical representations of the regulated dynamics and admit constructive numerical realizations only for systems of rather low dimension.

There have been few attempts at robust control of large-scale systems with both continuous states and non-linear dynamics, which is the focus of this paper. Our contribution is twofold. In section 2, we first consider a simpler case where the ambiguity set and action space are both finite, and introduce a sampling-based planner that approximately maximizes the robust objective (3). In section 3, we move to continuous ambiguity sets and form a conservative relaxation of the robust policy evaluation problem using interval predictors. In section 4, we illustrate the benefits of both techniques (for a discrete versus a continuous ambiguity set) on a problem of tactical decision-making for autonomous driving.

2 Sampling-based planning

If the true dynamics model were known and the action space finite, sampling-based algorithms could be used to perform approximate optimal planning. In order to generalize to the robust setting, we need to make the following assumption about the structure of the ambiguity set:

Assumption 1 (Structure).

The ambiguity set $\Theta$ and the action space $\mathcal{A}$ are discrete and finite:

$$\Theta = \{\theta_1, \dots, \theta_m\}, \qquad \mathcal{A} = \{a^1, \dots, a^K\} \tag{4}$$

We slightly abuse notation and denote $f_k \triangleq f_{\theta_k}$ for $k \in \{1, \dots, m\}$.

Such a structure of the ambiguity set typically stems directly from expert knowledge of the problem at hand. In general, it is nonrectangular, which implies that the Robust Bellman Equation does not hold (Wiesemann2013). This prevents us from building on planners that implicitly use this property and generate trajectories step-by-step by picking promising successor states, such as MCTS (Coulom2006) or UCT (Kocsis2006). Instead, we turn to algorithms that perform optimistic sampling of entire sequences of actions and work directly at the leaves of the expanded tree (see, e.g. Bubeck2010). More precisely, we build on the work of Hren2008 on optimistic planning for deterministic dynamics, which we extend to the robust setting.

We use similar notations and consider the infinite look-ahead tree $\mathcal{T}$ composed of all reachable states. Each node corresponds to a joint state $(s^1, \dots, s^m)$ associated with the different dynamics $f_1, \dots, f_m$. The root starts at the current state, and each node has $K$ children, one per action $a \in \mathcal{A}$, associated with the successor joint state $(f_1(s^1, a), \dots, f_m(s^m, a))$. We use the standard notations over alphabets to refer to nodes in $\mathcal{T}$ as action sequences. Thus, a finite word $a \in \mathcal{A}^d$ of length $d$ represents the node obtained by following the action sequence $a$ from the root. Sequences $a$ and $b$ can be concatenated as $ab$, the set of suffixes of $a$ is $a\mathcal{A}^\infty = \{ab : b \in \mathcal{A}^\infty\}$, and the empty sequence is $\emptyset$.

The sample complexity is expressed in terms of the number of expanded nodes. It is related to the number of calls to the dynamics models: when a node is expanded, its successor states are computed for all actions and all dynamics. At iteration $n$, we denote by $\mathcal{T}_n$ the tree of already expanded nodes, and by $\mathcal{L}_n$ the set of its leaves.

Definition

Fix a dynamics model $f_k$. Hren2008 define, for any node $a$ of depth $d$, the optimal value $v_a$, its lower bound u-value $u_a(n)$ and its upper bound b-value $b_a(n)$. These variables depend on the dynamics and will therefore be referred to with a superscript notation: $v^k_a$, $u^k_a(n)$ and $b^k_a(n)$.

We extend these dynamics-dependent variables to the robust setting, using the superscript $r$ in notations: $v^r_a$, $u^r_a(n)$ and $b^r_a(n)$.

  • The robust value of a path $a \in \mathcal{A}^d$ is defined as the restriction of (3) to policies that start with the action sequence $a$:

    $$v^r_a \triangleq \sup_{x \in a\mathcal{A}^\infty} \; \min_{1 \le k \le m} \; \sum_{t=0}^{\infty} \gamma^t R(s^k_t, x_t), \qquad \text{with } s^k_{t+1} = f_k(s^k_t, x_t) \tag{5}$$

    By definition, the robust value of (3) is recovered at the root: $v^r \triangleq v^r_\emptyset$.

    Moreover, for any $a \in \mathcal{A}^*$ we have

    $$v^r_a = \max_{b \in \mathcal{A}} v^r_{ab} \tag{6}$$
  • The robust u-value $u^r_a(n)$ of a leaf node $a \in \mathcal{L}_n$ of depth $d$ is the worst-case discounted sum of rewards from the root to $a$. It is then backed up to the rest of the tree:

    $$u^r_a(n) \triangleq \begin{cases} \min_{1 \le k \le m} \sum_{t=0}^{d-1} \gamma^t R(s^k_t, a_{t+1}) & \text{if } a \in \mathcal{L}_n \\ \max_{b \in \mathcal{A}} u^r_{ab}(n) & \text{otherwise} \end{cases} \tag{7}$$
  • Likewise, the robust b-value $b^r_a(n)$ is defined at leaf nodes and backed up to the rest of the tree:

    $$b^r_a(n) \triangleq \begin{cases} u^r_a(n) + \dfrac{\gamma^d}{1-\gamma} & \text{if } a \in \mathcal{L}_n \text{ of depth } d \\ \max_{b \in \mathcal{A}} b^r_{ab}(n) & \text{otherwise} \end{cases} \tag{8}$$

    An illustration of the computation of the robust b-values is presented in Figure 1.

    Figure 1: The computation of robust b-values in Algorithm 1. The simulation of trajectories for every dynamics model is represented as stacked versions of the expanded tree $\mathcal{T}_n$.

    Figure 2: A few trajectories are sampled from an initial state following a policy with various dynamics parameters (in black). The union of reachability sets is shown in green, and its interval hull in red.
Remark 1 (On the ordering of min and max).

In the definition of $u^r$, it is essential that the minimum over the models is only taken at the end of trajectories, in the same way as for the robust objective (3), in which the worst-case dynamics is only determined after the policy has been fully specified. Assume instead that the robust u-value is naively defined as the minimum of the model-wise u-values, backed up independently for each model:

$$\tilde{u}^r_a(n) \triangleq \min_{1 \le k \le m} u^k_a(n).$$

This would not recover the robust policy, as we show in Figure 3 with a simple counter-example.

Figure 3: From left to right: two simple models and corresponding u-values with optimal sequences in blue; the naive version of the robust values returns sub-optimal paths in red; our robust u-value properly recovers the robust policy in green.

From these definitions we introduce Algorithm 1, and analyse its sample-efficiency in Theorem 1.

1 Initialize $\mathcal{T}_0$ to a root node and expand it. Set $n = 1$.
2 while numerical resources available do
3       Compute the robust u-values $u^r_a(n)$ and robust b-values $b^r_a(n)$ for all $a \in \mathcal{T}_n$.
4       Expand the leaf $\arg\max_{a \in \mathcal{L}_n} b^r_a(n)$.
5       $n \leftarrow n + 1$
6 return the root action $\arg\max_{a \in \mathcal{A}} u^r_a(n)$
Algorithm 1 Deterministic Robust Optimistic Planning
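The following Python sketch illustrates the structure of Algorithm 1 under simplifying assumptions of ours (deterministic models given as callables, rewards in [0, 1], a fixed expansion budget); it is an illustration of the robust u/b-value mechanics rather than the authors' implementation:

```python
class Node:
    def __init__(self, states, rewards, depth, action=None):
        self.states = states      # one state per dynamics model (joint state)
        self.rewards = rewards    # discounted reward accumulated from the root, per model
        self.depth = depth
        self.action = action
        self.children = {}

def robust_plan(root_state, models, actions, reward, gamma=0.95, budget=200):
    """Simplified robust optimistic planning: expand the leaf with the highest
    robust b-value, then return the root action with the highest robust u-value."""
    root = Node([root_state] * len(models), [0.0] * len(models), depth=0)
    leaves = [root]
    for _ in range(budget):
        # Robust u-value of a leaf: worst-case accumulated reward among models;
        # the b-value adds the best continuation gamma^d / (1 - gamma) for rewards in [0, 1].
        b_value = lambda n: min(n.rewards) + gamma ** n.depth / (1 - gamma)
        leaf = max(leaves, key=b_value)
        leaves.remove(leaf)
        for a in actions:
            states = [f(s, a) for f, s in zip(models, leaf.states)]
            rewards = [r + gamma ** leaf.depth * reward(s, a)
                       for r, s in zip(leaf.rewards, leaf.states)]
            child = Node(states, rewards, leaf.depth + 1, action=a)
            leaf.children[a] = child
            leaves.append(child)
    # Back up robust u-values from the leaves and pick the best root action.
    def u_value(node):
        if not node.children:
            return min(node.rewards)
        return max(u_value(c) for c in node.children.values())
    return max(root.children, key=lambda a: u_value(root.children[a]))
```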
Lemma 1 (Robust values ordering).

The robust values, u-values and b-values exhibit the same ordering properties as the optimal values, u-values and b-values, that is: for all $n$ and $a \in \mathcal{T}_n$,

$$u^r_a(n) \;\le\; v^r_a \;\le\; b^r_a(n) \tag{9}$$
Proof.

This result stems directly from the definitions, see more details in Appendix A.1. ∎

The simple regret of the action $a(n)$ returned by Algorithm 1 after $n$ rounds is defined as:

$$r_n \triangleq \max_{a \in \mathcal{A}} v^r_a - v^r_{a(n)} \tag{10}$$

We will say that $r_n = O(g(n))$ for some function $g$ if there exist $C > 0$ and $n_0$ such that $r_n \le C\, g(n)$ for all $n \ge n_0$. A node $a$ of depth $d$ is said to be $\epsilon$-optimal, in a robust sense, if and only if $v^r_a \ge v^r - \epsilon$ for some $\epsilon > 0$. The proportion of $\frac{\gamma^d}{1-\gamma}$-optimal nodes at depth $d$ is then denoted $p_d$. Further, we will assume that for the tree $\mathcal{T}$ the following hypothesis is satisfied:

Assumption 2 (Proportion of near-optimal nodes).

There exist $\kappa \in [1, K]$, $c > 0$ and $d_0 \in \mathbb{N}$ such that $p_d \le c\, \kappa^d / K^d$ for all $d \ge d_0$.

Theorem 1 (Regret bound).

Let $\kappa$ be defined as in Assumption 2. Then the simple regret of Algorithm 1 is:

$$r_n = O\left(n^{-\frac{\log 1/\gamma}{\log \kappa}}\right) \quad \text{if } \kappa > 1, \tag{11}$$
$$r_n = O\left(\gamma^{\,n/c}\right) \quad \text{if } \kappa = 1. \tag{12}$$
Proof.

We use the properties shown in Lemma 1 and derive a robust counterpart of the proof of Hren2008, which we only modify slightly. See more details in Appendix A.2. ∎

3 Interval predictors

In this section, we assume that the ambiguity set $\Theta$ is continuous and bounded.

In the robust objective (3), the $\min$ operator only requires us to describe the set of states that can be reached with non-zero probability.

Definition

The reachability set $S_t$ at time $t$ is the set of all states that can be reached by starting from the initial state $s_0$ and following the policy $\pi$ along some transition dynamics $f_\theta$ with $\theta \in \Theta$:

$$S_t \triangleq \left\{ s_t \;:\; \exists\, \theta \in \Theta \;\text{s.t.}\; s_{k+1} = f_\theta(s_k, \pi(s_k)) \;\; \forall\, k \in \{0, \dots, t-1\} \right\} \tag{13}$$

This set can still have a complex shape. We approximate it by an enclosing set that is easier to represent and manipulate: its interval hull.

Definition

The interval hull of $S_t$, denoted $\square S_t$, is the smallest interval (axis-aligned box) containing it:

$$\square S_t \triangleq \left[\min_{s \in S_t} s, \; \max_{s \in S_t} s\right] \tag{14}$$

The max and min operators are applied element-wise. This set is illustrated in Figure 2.
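As an illustration, the interval hull of (14) can be approximated by sampling trajectories for a few parameter values, in the spirit of the trajectory-based methods discussed below; the function names and the NumPy-based state representation are our own assumptions:

```python
import numpy as np

def interval_hull(states):
    """Element-wise interval hull of a set of state samples, as in (14)."""
    states = np.asarray(states)
    return states.min(axis=0), states.max(axis=0)

def sampled_reachable_hulls(dynamics, policy, s0, thetas, horizon):
    """Approximate the interval hull of the reachability set S_t at each time t
    by rolling out the policy for several sampled parameters theta."""
    trajectories = []
    for theta in thetas:
        s, traj = np.array(s0, dtype=float), []
        for _ in range(horizon):
            s = dynamics(s, policy(s), theta)
            traj.append(np.array(s, dtype=float))
        trajectories.append(traj)
    # Hull over the sampled parameters, separately for each timestep
    return [interval_hull([traj[t] for traj in trajectories])
            for t in range(horizon)]
```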

State intervals have been used to describe the evolution of uncertain systems and derive feedback laws that achieve closed-loop stability in the presence of bounded disturbances (Stinga2012; Efimov2016; Dinh2017).

The main techniques of interval simulation have been listed and described in a survey by Puig2005, in which they are sorted into two categories. Region-based methods use the estimate of $\square S_{t-1}$ at the previous timestep to bootstrap the current estimate at time $t$. They rely on the theory of positive systems and are usually computationally efficient. However, the positive inclusion dynamics of a system may lead to overestimation of the true $S_t$ and even to unstable behaviour. Trajectory-based methods estimate $\square S_t$ by taking the $\min$ and $\max$ in (14) over trajectories sampled for several $\theta \in \Theta$. These methods produce subset estimates of the true $\square S_t$ and do not suffer from the wrapping effect, but are often more computationally costly.

In this work, we leverage them to derive a proxy for the robust objective (3).

Definition

Let us denote the robust objective of equation (3) as $J^r(\pi)$.

We define the surrogate objective $\hat{J}^r(\pi)$ on a finite horizon $H$ as:

$$\hat{J}^r(\pi) \triangleq \sum_{t=0}^{H} \gamma^t \min_{s \in \square S_t} R(s, \pi(s)) \tag{15}$$
1 Algorithm robust_control()
2       Initialize a set of policies $\Pi$
3       while resources available do
4             evaluate() each policy $\pi \in \Pi$ at the current state
5             Update $\Pi$ by policy search
6       end while
7       return $\arg\max_{\pi \in \Pi} \hat{J}^r(\pi)$

1 Procedure evaluate($\pi$, $s_0$)
2       Compute the state intervals $\square S_t$ on a horizon $H$
3       Minimize the reward over the intervals $\square S_t$ for all $t \le H$
4       return $\hat{J}^r(\pi)$
Algorithm 2 Interval-based Robust Control
Property 1 (Lower bound).

The surrogate objective $\hat{J}^r$ is a lower bound of the true objective $J^r$:

$$\forall \pi, \quad \hat{J}^r(\pi) \;\le\; J^r(\pi) \tag{16}$$
Proof.

By bounding the collected rewards by their minimum over the state intervals $\square S_t$. See Appendix A.3. ∎

The robust objective error stems from two terms: the interval approximation of the reachable set, and the loss of time-dependency between the states within a single trajectory. If both approximations are tight enough, maximizing the lower bound $\hat{J}^r$ will increase the true objective $J^r$, which is the idea behind Algorithm 2. It is classically structured as an alternation of a Policy Evaluation step, during which the surrogate objective is evaluated for a set of policies $\Pi$, and a Policy Search step, which aims to steer the set of policies towards regions where the surrogate objective is maximal. The main Policy Search algorithms are listed in a survey by Deisenroth2011b. In our case, derivative-free methods such as evolutionary strategies (e.g. CMA-ES) are more appropriate than policy gradient methods, since $\hat{J}^r$ cannot be easily differentiated. Planning algorithms can also be used to exploit the dynamics and structure of the surrogate objective.
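A possible instantiation of Algorithm 2 is sketched below, with a plain random search standing in for the CMA-ES-style policy search mentioned above; the `interval_predictor` and `reward_lower_bound` callables are placeholders for problem-specific components and not part of the paper's code:

```python
import numpy as np

def surrogate_objective(interval_predictor, reward_lower_bound, policy, s0,
                        horizon=20, gamma=0.95):
    """Evaluate the surrogate objective (15): discounted sum of worst-case rewards
    over the predicted state intervals."""
    intervals = interval_predictor(policy, s0, horizon)   # list of (lower, upper) pairs
    return sum(gamma ** t * reward_lower_bound(lo, hi, policy)
               for t, (lo, hi) in enumerate(intervals))

def robust_control(interval_predictor, reward_lower_bound, s0, make_policy,
                   dim, iterations=50, population=20, sigma=0.3, seed=0):
    """Derivative-free policy search maximizing the surrogate objective."""
    rng = np.random.default_rng(seed)
    best_params, best_value = np.zeros(dim), -np.inf
    for _ in range(iterations):
        # Sample a population of policy parameters around the current best
        candidates = best_params + sigma * rng.standard_normal((population, dim))
        for params in candidates:
            value = surrogate_objective(interval_predictor, reward_lower_bound,
                                        make_policy(params), s0)
            if value > best_value:
                best_params, best_value = params, value
    return make_policy(best_params), best_value
```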

4 Experiments

Most autonomous driving architectures perform the prediction of other drivers' trajectories and the planning of a collision-free path for the ego-vehicle sequentially. As a consequence, they fail to account for interactions between the traffic participants and the ego-vehicle, leading to overly conservative decisions and a lack of negotiation abilities (Trautman2010). In this work, we perform both tasks jointly so as to anticipate the effect of our own decisions on the dynamics of the nearby traffic. However, human decisions are not fully predictable and cannot be reduced to a single deterministic model. To avoid model bias, we provide a whole ambiguity set of reasonable closed-loop behavioural models for other vehicles, and plan robustly with respect to this ambiguity.

We introduce a new environment for simulated highway driving and tactical decision-making (source code is available at https://github.com/eleurent/highway-env).

(a) The possible trajectories (blue) for fixed behaviours and varying destinations
(b) The possible trajectories (green-red gradient) for fixed destination and varying behaviours
Figure 4: The highway-env environment. The ego-vehicle (green) is approaching a roundabout with flowing traffic (yellow).

Vehicle motion is described by the Kinematic Bicycle Model (see, e.g. Polack2017). Vehicles follow a lane-keeping lateral behaviour, and a longitudinal behaviour inspired by the Intelligent Driver Model (Treiber2000), which balances reaching a desired velocity and respecting a safe time gap. Lane-change decisions are determined by the MOBIL model (Kesting2007): they must increase the vehicles' accelerations while satisfying safe braking decelerations. The behaviour parameters of each traffic participant are sampled uniformly from a compact set $\Theta$.

The ego-vehicle can be controlled with a finite set of tactical decisions $\mathcal{A}$ = {no-op, right-lane, left-lane, faster, slower}, implemented by low-level controllers. It is rewarded for driving fast along a planned route while avoiding collisions. More information on the environment modelling is provided in the appendices.
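For reference, a typical interaction loop with the released highway-env package looks roughly as follows; the environment id and the gym API details are indicative and may differ between package versions, and the random action is merely a stand-in for the planner's decision:

```python
import gym
import highway_env  # noqa: F401  (registers the highway-env environments)

# A roundabout scene similar to Figure 4; the id and defaults are indicative.
env = gym.make("roundabout-v0")
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()   # replace with the robust planner's decision
    obs, reward, done, info = env.step(action)
    env.render()
```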

We carry out two experiments (video and source code are available at https://eleurent.github.io/robust-control/). First, the behavioural parameters of the traffic participants are fixed but their planned routes are unknown: we enumerate every direction they can take at their next intersection (see Figure 4(a)) and plan robustly with respect to this finite ambiguity set using Algorithm 1. Second, we assume on the contrary that the agents' planned routes are known but not their behavioural parameters (see Figure 4(b)). We plan robustly with respect to this continuous ambiguity set using Algorithm 2. Crucially, the state interval prediction is conditioned on the planned policy $\pi$.

In both experiments, we compare the performance of the robust planner to an oracle planner that has perfect knowledge of the system dynamics, and to a nominal planner that plans optimistically with respect to a single dynamics model sampled uniformly from the ambiguity set. Statistics are collected over 100 episodes with random environment initialization. Results are presented in Table 1.

Ambiguity set   Agent         Worst-case return   Mean return ± std
True model      Oracle
Discrete        Nominal
                Algorithm 1
Continuous      Nominal
                Algorithm 2
Table 1: Performance of robust planners on two ambiguous environments.

5 Conclusion

This paper has presented two methods for approximately solving the robust control problem. In the simpler case of a finite ambiguity set and action space, we use optimistic planning and provide an upper bound on the simple regret; a direct consequence is that we recover the robust policy as the computational budget increases. In the general case, we use interval prediction to efficiently solve a conservative approximation of the robust objective, while providing a lower bound on the performance of a policy when applied to the unknown true model. However, this method is lossy and does not enjoy asymptotic consistency. Both algorithms are flexible, handling a variety of parametrized dynamical systems, and practical, with a focus on computational efficiency. The two methods are also orthogonal, which means they can be combined to deal with complex ambiguity sets that display both continuous and discrete features, such as disjoint unions of connected sets.

Acknowledgments

This work has been supported by CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020, the French Ministry of Higher Education and Research, INRIA, and the French Agence Nationale de la Recherche (ANR).

References

Appendix A Detailed proofs

A.1 Lemma 1

Proof.

By definition, when starting with the sequence $a$, the value $u^r_a(n)$ represents the minimum admissible reward, while $b^r_a(n)$ corresponds to the best admissible reward achievable with respect to the possible continuations of $a$. Thus, for all $a$, the u-values $u^r_a(n)$ and $u^k_a(n)$ are non-decreasing functions of $n$, the b-values $b^r_a(n)$ and $b^k_a(n)$ are non-increasing functions of $n$, while $v^r_a$ and $v^k_a$ do not depend on $n$.

Moreover, since the reward function $R$ is assumed to take values in $[0, 1]$, the sum of discounted rewards collected from a node of depth $d$ onwards is at most $\frac{\gamma^d}{1-\gamma}$. As a consequence, for all $n$, for every leaf $a \in \mathcal{L}_n$ of depth $d$, and for any sequence of rewards $(r^k_t)_{t \ge 0}$ obtained by following a path in $a\mathcal{A}^\infty$ with any dynamics $f_k$:

$$0 \;\le\; \sum_{t=d}^{\infty} \gamma^t r^k_t \;\le\; \frac{\gamma^d}{1-\gamma}$$

That is equivalent to:

$$\sum_{t=0}^{d-1} \gamma^t r^k_t \;\le\; \sum_{t=0}^{\infty} \gamma^t r^k_t \;\le\; \sum_{t=0}^{d-1} \gamma^t r^k_t + \frac{\gamma^d}{1-\gamma}$$

Hence, taking the minimum over the dynamics models,

$$u^r_a(n) \;\le\; \min_{1 \le k \le m} \sum_{t=0}^{\infty} \gamma^t r^k_t \;\le\; b^r_a(n) \tag{17}$$

And as the left-hand and right-hand sides of (17) are independent of the particular path that was followed in $a\mathcal{A}^\infty$, it also holds for the robust path achieving the supremum in (5); that is,

$$u^r_a(n) \;\le\; v^r_a \;\le\; b^r_a(n) \quad \text{for all } a \in \mathcal{L}_n \tag{18}$$

Finally, (18) is extended to the rest of $\mathcal{T}_n$ by recursive application of (6), (7) and (8). ∎

A.2 Theorem 1

Proof.

Hren2008 first show in their Theorem 2 that the simple regret of their optimistic planner is bounded by $\frac{\gamma^{d_n}}{1-\gamma}$, where $d_n$ is the depth of $\mathcal{T}_n$. This property relies on the fact that the returned action belongs to the deepest explored branch, which we can show likewise by contradiction using Lemma 1. This directly yields that, if $a$ is some node of maximal depth $d_n$ expanded at round $n$, which by Algorithm 1 satisfies $b^r_a(n) = \max_{a' \in \mathcal{L}_n} b^r_{a'}(n)$, then:

$$r_n \;\le\; \frac{\gamma^{d_n}}{1-\gamma} \tag{19}$$

Secondly, they bound the depth $d_n$ of $\mathcal{T}_n$ with respect to $n$. To that end, they show that the expanded nodes always belong to the sub-tree $\mathcal{T}^\infty_\epsilon$ of all the nodes of depth $d$ that are $\frac{\gamma^d}{1-\gamma}$-optimal. Indeed, if a node $a$ of depth $d$ is expanded at round $n$, then $b^r_a(n) \ge b^r_{a'}(n)$ for all $a' \in \mathcal{L}_n$ by Algorithm 1, thus the max-backups of (8) up to the root yield $b^r_a(n) \ge v^r$. Moreover, by Lemma 1 we have $v^r_a \ge u^r_a(n) = b^r_a(n) - \frac{\gamma^d}{1-\gamma}$ and so $v^r_a \ge v^r - \frac{\gamma^d}{1-\gamma}$, thus $a \in \mathcal{T}^\infty_\epsilon$.

Then, from Assumption 2 and the definition of $p_d$ applied to the nodes in $\mathcal{T}^\infty_\epsilon$, there exist $c > 0$ and $d_0$ such that the number of nodes of depth $d \ge d_0$ in $\mathcal{T}^\infty_\epsilon$ is bounded by $c\, \kappa^d$. As a consequence,

$$n \;\le\; \sum_{d=0}^{d_n} c\, \kappa^d,$$

where $\kappa$ is the constant of Assumption 2.

  • If $\kappa > 1$, then $n \le c'\, \kappa^{d_n}$ for some constant $c' > 0$ and thus $d_n \ge \frac{\log(n/c')}{\log \kappa}$. We conclude from (19) that $r_n = O\left(n^{-\frac{\log 1/\gamma}{\log \kappa}}\right)$.

  • If $\kappa = 1$, then $d_n \ge n/c - 1$, hence from (19) we have $r_n = O\left(\gamma^{\,n/c}\right)$. ∎

A.3 Property 1

Proof.

For any $\theta \in \Theta$, and any trajectory $(s_t)_{t \ge 0}$ sampled by following $\pi$ under the dynamics $f_\theta$, we have $s_t \in S_t \subseteq \square S_t$ for all $t$. Hence,

$$R(s_t, \pi(s_t)) \;\ge\; \min_{s \in \square S_t} R(s, \pi(s)).$$

And finally, since the rewards are non-negative,

$$J^r(\pi) \;=\; \min_{\theta \in \Theta} \sum_{t=0}^{\infty} \gamma^t R(s_t, \pi(s_t)) \;\ge\; \sum_{t=0}^{H} \gamma^t \min_{s \in \square S_t} R(s, \pi(s)) \;=\; \hat{J}^r(\pi). \qquad ∎$$

Appendix B Environment dynamics

B.1 Kinematics

The vehicle kinematics are represented by the Kinematic Bicycle Model:

$$\dot{x} = v \cos(\psi + \beta) \tag{20}$$
$$\dot{y} = v \sin(\psi + \beta) \tag{21}$$
$$\dot{v} = a \tag{22}$$
$$\dot{\psi} = \frac{v}{l} \sin\beta \tag{23}$$

where $(x, y)$ is the vehicle position, $v$ its forward velocity and $\psi$ its heading, $l$ is the vehicle half-length, $a$ is the acceleration command and $\beta$ is the slip angle at the center of gravity, used as a steering command.

Each vehicle $i$ is represented by its kinematic state $s^i = (x^i, y^i, v^i, \psi^i)$. The joint state of the traffic is represented by $s = (s^1, \dots, s^N)$.
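A simple Euler discretization of (20)-(23), with symbol names and default values of our own choosing, could look like this:

```python
import numpy as np

def bicycle_step(state, acceleration, slip_angle, half_length=2.5, dt=0.1):
    """One Euler step of the Kinematic Bicycle Model (20)-(23).
    state = (x, y, v, psi); controls are the acceleration a and slip angle beta."""
    x, y, v, psi = state
    dx = v * np.cos(psi + slip_angle)
    dy = v * np.sin(psi + slip_angle)
    dv = acceleration
    dpsi = v / half_length * np.sin(slip_angle)
    return np.array([x + dt * dx, y + dt * dy, v + dt * dv, psi + dt * dpsi])
```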

B.2 Longitudinal control

The acceleration control is assumed to be linearly parametrized:

$$a^i = \theta^\top \phi(s, i) \tag{24}$$

where $\theta \in \mathbb{R}^3$ is an uncertain weight vector, and $\phi(s, i) \in \mathbb{R}^3$ is a feature vector that depends on the joint state $s$ and the considered vehicle $i$.

It is composed of:

  • a target velocity seeking term,

  • a braking term to adjust the velocity w.r.t. the front vehicle,

  • a braking term to respect a safe distance w.r.t. the front vehicle.

Denoting $f_i$ the front vehicle preceding vehicle $i$, the feature vector $\phi(s, i)$ is defined by

$$\phi(s, i) = \begin{bmatrix} v_0 - v^i \\ \min\!\left(v^{f_i} - v^i,\, 0\right) \\ \min\!\left(x^{f_i} - x^i - (d_0 + v^i\,T),\, 0\right) \end{bmatrix} \tag{25}$$

where $\min(\cdot, 0)$ is the negative part function, and $v_0$, $d_0$ and $T$ respectively denote the speed limit, jam distance and time gap given by traffic rules.

We observe that this model exhibits qualitative behaviour similar to the IDM's.
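The sketch below illustrates the linearly parametrized longitudinal control (24)-(25); the dict-based vehicle representation, default traffic-rule constants and sign conventions are illustrative assumptions rather than the exact values used in the paper:

```python
import numpy as np

def longitudinal_features(ego, front, v_max=30.0, jam_distance=10.0, time_gap=1.5):
    """Feature vector of (25): velocity seeking, velocity matching and spacing terms.
    `ego` and `front` are dicts with keys 'x' (longitudinal position) and 'v' (speed);
    `front` may be None when there is no preceding vehicle."""
    negative_part = lambda z: min(z, 0.0)
    return np.array([
        v_max - ego["v"],                                          # reach the desired velocity
        negative_part(front["v"] - ego["v"]) if front else 0.0,    # brake when closing in
        negative_part(front["x"] - ego["x"]                        # keep a safe distance
                      - (jam_distance + time_gap * ego["v"])) if front else 0.0,
    ])

def acceleration(theta, ego, front):
    """Linearly parametrized longitudinal control (24): a = theta . phi."""
    return float(np.dot(theta, longitudinal_features(ego, front)))
```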

B.3 Lateral control

A non-linear lane-keeping controller is implemented as follows: a lane with lateral position $y_L$ and heading $\psi_L$ is tracked by performing

  1. Position control

    $$\dot{y}_{\text{cmd}} = K_{p,y}\,(y_L - y) \tag{26}$$
  2. Lateral velocity to heading conversion

    $$\psi_{\text{cmd}} = \psi_L + \arcsin\left(\frac{\dot{y}_{\text{cmd}}}{v}\right) \tag{27}$$
  3. Heading control

    $$\dot{\psi}_{\text{cmd}} = K_{p,\psi}\,(\psi_{\text{cmd}} - \psi) \tag{28}$$
  4. Heading rate to steering angle conversion

    $$\beta = \arcsin\left(\frac{l}{v}\,\dot{\psi}_{\text{cmd}}\right) \tag{29}$$

Finally,

$$\beta = \arcsin\left(\frac{l}{v}\, K_{p,\psi}\left(\psi_L + \arcsin\left(\frac{K_{p,y}\,(y_L - y)}{v}\right) - \psi\right)\right) \tag{30}$$
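A sketch of this cascade, with proportional gains and clipping chosen by us purely for illustration, is given below:

```python
import numpy as np

def lane_keeping_slip_angle(y, psi, v, lane_y, lane_psi,
                            k_y=0.1, k_psi=2.0, half_length=2.5):
    """Cascade controller (26)-(29): lateral position -> lateral velocity ->
    heading -> heading rate -> slip angle command."""
    v = max(v, 1e-2)                                    # avoid division by zero at rest
    lat_velocity_cmd = k_y * (lane_y - y)               # (26) position control
    heading_cmd = lane_psi + np.arcsin(np.clip(lat_velocity_cmd / v, -1.0, 1.0))  # (27)
    heading_rate_cmd = k_psi * (heading_cmd - psi)      # (28) heading control
    slip_angle = np.arcsin(np.clip(half_length * heading_rate_cmd / v, -1.0, 1.0))  # (29)
    return slip_angle
```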

The non-linear controller presented in this subsection can be linearised around its equilibrium $(y, \psi) = (y_L, \psi_L)$.

(31)
(32)
(33)

with

(34)

and

(35)

B.4 Discrete behaviour

The MOBIL model (Kesting2007), which stands for Minimizing Overall Braking Induced by Lane Changes, is a discrete lateral decision model that formulates a criterion for lane changes in terms of safe braking decelerations and increased overall accelerations according to a longitudinal model.

It states that a lane change should be performed if and only if:

  1. It does not impose an unsafe braking on the following vehicle in the target lane:

    $$\tilde{a}_n \ge -b_{\text{safe}} \tag{36}$$
  2. It enables the vehicle and (with a politeness factor $p$) its following vehicles on both the current and target lanes to increase their overall acceleration:

    $$\tilde{a}_c - a_c + p\left[(\tilde{a}_n - a_n) + (\tilde{a}_o - a_o)\right] \ge \Delta a_{\text{th}} \tag{37}$$

    where $a_c$, $a_n$ and $a_o$ denote the accelerations of the lane-changing vehicle, the new follower and the old follower before the change, the tilded quantities denote the corresponding accelerations after the candidate change, $b_{\text{safe}}$ is the maximum safe braking deceleration and $\Delta a_{\text{th}}$ is the acceleration gain threshold.

This model determines the discrete changes of each vehicle's target lane.
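The two MOBIL tests can be written compactly as follows; the politeness factor, safe braking limit and gain threshold are typical values, not necessarily those used in the environment, and the predicted accelerations are assumed to come from the longitudinal model above:

```python
def mobil_lane_change(a_ego, a_ego_new, a_new_follower, a_new_follower_new,
                      a_old_follower, a_old_follower_new,
                      politeness=0.3, b_safe=4.0, gain_threshold=0.2):
    """MOBIL criterion: (36) the new follower must not brake harder than b_safe,
    and (37) the weighted acceleration gain must exceed a threshold.
    The *_new arguments are the accelerations predicted after the candidate change."""
    if a_new_follower_new < -b_safe:                    # (36) safety criterion
        return False
    gain = (a_ego_new - a_ego
            + politeness * ((a_new_follower_new - a_new_follower)
                            + (a_old_follower_new - a_old_follower)))
    return gain > gain_threshold                        # (37) incentive criterion
```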

Appendix C Interval Predictor

In this section, we design an interval predictor for our system.

C.1 Notations

For any real variable $x$, we denote an interval containing $x$ as $[x] = [\underline{x}, \overline{x}]$, such that $\underline{x} \le x \le \overline{x}$. As subsets of $\mathbb{R}$, intervals can be scaled and offset by scalars. This definition is extended element-wise to vector variables.

Then, we define several operators over intervals $[x_1]$ and $[x_2]$:

  • The product operator

    $$\underline{x_1 x_2} = \underline{x_1}^+\underline{x_2}^+ - \overline{x_1}^+\underline{x_2}^- - \underline{x_1}^-\overline{x_2}^+ + \overline{x_1}^-\overline{x_2}^- \tag{38}$$
    $$\overline{x_1 x_2} = \overline{x_1}^+\overline{x_2}^+ - \underline{x_1}^+\overline{x_2}^- - \overline{x_1}^-\underline{x_2}^+ + \underline{x_1}^-\underline{x_2}^- \tag{39}$$

    where $x^+ \triangleq \max(x, 0)$ and $x^- \triangleq x^+ - x$ denote the positive and negative parts of $x$, respectively.

  • The difference operator

    $$[x_1] - [x_2] = \left[\underline{x_1} - \overline{x_2}, \; \overline{x_1} - \underline{x_2}\right] \tag{40}$$
  • The cosine and sine operators

    $$\underline{\cos[x]} = \begin{cases} -1 & \text{if } [\underline{x}, \overline{x}] \text{ contains an odd multiple of } \pi \\ \min(\cos\underline{x}, \cos\overline{x}) & \text{otherwise} \end{cases} \tag{41}$$
    $$\overline{\cos[x]} = \begin{cases} 1 & \text{if } [\underline{x}, \overline{x}] \text{ contains an even multiple of } \pi \\ \max(\cos\underline{x}, \cos\overline{x}) & \text{otherwise} \end{cases} \tag{42}$$
    $$\underline{\sin[x]} = \underline{\cos[x - \pi/2]} \tag{43}$$
    $$\overline{\sin[x]} = \overline{\cos[x - \pi/2]} \tag{44}$$
  • The inverse operator over a positive interval, i.e. with $\underline{x} > 0$

    $$\frac{1}{[x]} = \left[\frac{1}{\overline{x}}, \; \frac{1}{\underline{x}}\right] \tag{45}$$
  • Any other function $g$ is assumed increasing on the interval and is applied coefficient-wise

    $$g([x]) = \left[g(\underline{x}), \; g(\overline{x})\right] \tag{46}$$

We start with an initial estimate of the intervals over the state variables $[x]$, $[y]$, $[v]$ and $[\psi]$. Typically, we use zero-width intervals centred on the current state observation. Likewise, any variable $z$ used in place of an interval corresponds to the zero-width interval $[z, z]$.
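For illustration, these operators can be implemented with intervals stored as (lower, upper) pairs; the trigonometric case analysis below is a standard construction and not necessarily the authors' exact formulation:

```python
import numpy as np

def interval_product(x, y):
    """Product operator (38)-(39): tight interval enclosing x * y."""
    candidates = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return min(candidates), max(candidates)

def interval_difference(x, y):
    """Difference operator (40)."""
    return x[0] - y[1], x[1] - y[0]

def interval_cos(x):
    """Cosine operator: exact range of cos over [x_low, x_high]."""
    lo, hi = x
    values = [np.cos(lo), np.cos(hi)]
    # include +/-1 whenever a multiple of pi lies inside the interval
    k_min, k_max = int(np.ceil(lo / np.pi)), int(np.floor(hi / np.pi))
    for k in range(k_min, k_max + 1):
        values.append(1.0 if k % 2 == 0 else -1.0)
    return min(values), max(values)

def interval_sin(x):
    """Sine operator, via sin(z) = cos(z - pi/2)."""
    return interval_cos((x[0] - np.pi / 2, x[1] - np.pi / 2))

def interval_inverse(x):
    """Inverse operator (45), only defined here over a positive interval."""
    assert x[0] > 0, "inverse requires a positive interval"
    return 1.0 / x[1], 1.0 / x[0]

def interval_increasing(g, x):
    """(46): any increasing function applied coefficient-wise to the bounds."""
    return g(x[0]), g(x[1])
```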

C.2 Intervals for features

We use (25) and (35) respectively to derive intervals for the longitudinal and lateral features from the intervals over the states.

We index the front vehicle intervals with the subscript $f$:

(47)

and

(48)

C.3 Intervals for controls

The control intervals are derived from (24) and (33):

(49)
(50)

C.4 Intervals for velocity and heading

The velocity interval is derived from (22) and the heading interval from (23)

(51)
(52)

C.5 Intervals for positions

Likewise, the position intervals are derived from the kinematics (20) and (21):

(53)
(54)