The usefulness of learning from demonstrations has been well-argued in the literature (see [2, 4, 41]). In the context of control, imagine that we need to design a controller for an autonomous car that prioritizes the comfort of its passengers. It is not obvious how to capture the idea of comfortable driving in a mathematical expression. It is fairly straightforward, however, to collect demonstrations of comfortable driving from human drivers. There are many other control tasks where providing examples of the desired behaviour is easier than defining that behaviour formally (e.g., teaching a robot to manipulate objects). The growing research interest in learning from demonstrations (LfD) for robot control reflects the need for a well-defined controller design methodology for such tasks. In this work, we propose a methodology that uses expert demonstrations to construct a stabilizing controller.
There are many examples in the literature where various LfD methodologies have been applied to robots . The most popular application of LfD so far is to robotic manipulators. More specifically, LfD is used to teach manipulators to perform tasks in manufacturing , health-care [48, 28], and human-robot interaction [30, 40]. In addition, LfD has been applied with significant success to ground vehicles [10, 37], aerial vehicles [24, 1], bipedal robots [12, 32], and quadrupedal robots [26, 33]. These examples illustrate that, for these platforms, there exist control tasks for which LfD techniques are preferable to traditional control approaches.
1.2. Related work
In this section, we describe the previous work in learning from demonstrations to indicate where our approach lies within the existing landscape. This is in no way a comprehensive account of the literature on learning from demonstrations, but rather an overview of approaches related to ours (please refer to  or  for a description of the literature on LfD).
Policy-learning LfD methods, to which this work belongs, assume that there exists a mapping from states (or observations) to control inputs that dictates the expert’s behaviour. This mapping is referred to as the expert’s policy. The goal of these methods is to find (or approximate) the expert’s policy given expert demonstrations. In many machine-learning-based LfD methods, policy learning is viewed as a supervised-learning problem where states and control inputs are treated as features and labels, respectively. We refer to these methods as behavioural cloning methods. Pioneered in the 80s by works like , this class of methods is still popular today. Behavioural cloning methods are typically agnostic to the nature of the expert — demonstrations can be provided by a human (see [5, 10]), an offline optimal controller (see [29, 9]), or a controller with access to privileged state information (see [8, 24]). They do, however, require a large number of demonstrations to work well in practice and, if trained solely on data from unmodified expert demonstrations, generate unstable policies that cannot recover from drifts or disturbances . The latter problem can be fixed using online meta-algorithms like DAgger , which ensure that the training data includes observations of recoveries from perturbations. Using such algorithms, however, comes at the expense of enlarging the training dataset. Moreover, works on behavioural cloning typically provide few formal stability guarantees and, instead, illustrate performance with experiments.
Currently, there is a concerted effort to develop policy-learning LfD methods that improve on existing techniques using tools from control theory. In that context, the work that is closest to ours is described in , where the authors use convex optimization to construct a linear policy that is both close to expert demonstrations and stabilizes a linear system. They guarantee that the resulting controller is optimal with respect to some quadratic cost by adding an additional set of constraints (originally proposed in ) to the optimization problem. This work has been extended in  to enforce other properties, such as stability, optimality, and -robustness. Our methodology is different from those in  and  because we do not assume the expert to be a linear time-invariant controller.
In this paper, we propose a methodology for constructing a controller for a known nonlinear system from a finite number of expert demonstrations of desired behaviour, provided their number exceeds the number of states and the demonstrations are sufficiently long. Our approach consists of two steps:
use feedback linearization to transform the nonlinear system into a chain of integrators;
use affine combinations of demonstrations in the transformed coordinates to construct a control law stabilizing the original system.
The expert demonstrations are assumed to be of finite length, whereas the resulting controller is expected to control the system indefinitely, which makes this a non-trivial problem. In this paper, we formally prove that the learned controller asymptotically stabilizes the system. Furthermore, in the case where there are more demonstrations than states, we determine which subset of demonstrations should be chosen to minimize the error between the trajectory of the learned controller and the trajectory of the expert controller. To demonstrate the feasibility of this methodology, we apply it to the problem of quadrotor control. Unlike , our methodology produces a controller that is time-varying and not linear in the original coordinates. This reflects our belief that, in many cases, the expert demonstrations are produced by a nonlinear controller. We also extend the proposed methodology beyond the class of feedback linearizable systems by using the embedding technique described in , and demonstrate its feasibility on the classical example of the ball-and-beam system.
A preliminary version of this methodology was introduced in . In , it was combined with the data-driven control results from  to learn to control unknown SISO systems from demonstrations. This paper provides a unified presentation of the results from , as well as several new results, such as a discussion of the optimality of the controller approximation error and an extension of the results beyond the class of feedback linearizable systems.
2. Problem Statement and Preliminaries
2.1. Notations and basic definitions
The notation used in this paper is fairly standard. The integers are denoted by , the natural numbers, including zero, by , the real numbers by , the positive real numbers by , and the non-negative real numbers by . We denote by (or by ) the standard Euclidean norm or the induced matrix 2-norm, and by the matrix Frobenius norm. A set of vectors in is affinely independent if the set is linearly independent.
A function is of class if is continuous, strictly increasing, and . If is also unbounded, it is of class . A function is of class if, for fixed , is of class and decreases to as for each fixed .
The Lie derivative of a function along a vector field , given by , is denoted by . We use the notation for the iterated Lie derivative, i.e., , with . Given open sets and , a smooth map is called a diffeomorphism from to if it is a bijection and its inverse is smooth.
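As a concrete aside (not part of the paper's development), iterated Lie derivatives are straightforward to compute symbolically. A minimal SymPy sketch with a made-up pendulum-like vector field and output function:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
states = [x1, x2]
f = [x2, -sp.sin(x1)]  # example drift vector field (pendulum-like)
h = x1                 # example output function

def lie_derivative(expr, vec, xs):
    """L_f h: the sum over i of (dh/dx_i) * f_i."""
    return sum(sp.diff(expr, x) * fi for x, fi in zip(xs, vec))

Lf_h = lie_derivative(h, f, states)      # first Lie derivative: x2
Lf2_h = lie_derivative(Lf_h, f, states)  # iterated L_f^2 h: -sin(x1)
print(Lf_h, Lf2_h)
```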
Consider the continuous-time control system:
where is the state, is the input, and is a smooth function. The system (3) is said to be input-to-state stable (ISS) if there exist and such that for any and any bounded input , the following is satisfied:
Let be a set of points in . A point with is called an affine combination of points in . If, in addition, for all , then is a convex combination of points in .
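For illustration (the example weights below are made up), the two notions differ only in whether the weights are allowed to be negative:

```python
import numpy as np

def combination_type(coeffs, tol=1e-9):
    """Classify a weight vector: affine requires the weights to sum to 1;
    convex additionally requires them to be non-negative."""
    c = np.asarray(coeffs, dtype=float)
    if abs(c.sum() - 1.0) > tol:
        return "neither"
    return "convex" if np.all(c >= -tol) else "affine"

print(combination_type([0.2, 0.3, 0.5]))   # convex
print(combination_type([1.5, -0.5, 0.0]))  # affine
print(combination_type([0.5, 0.5, 0.5]))   # neither
```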
2.2. Problem Statement
Consider a known continuous-time control-affine system:
where and are the state and the input, respectively; and , are smooth functions. Assume that the origin is an equilibrium point of (5). We call a pair a solution of the system (5) if, for all , the equation (5) is satisfied. Furthermore, we refer to the functions and as a trajectory and a control input of the system (5).
We say that a controller is asymptotically stabilizing for the system (5) if the origin is uniformly asymptotically stable for the system (5) with . Suppose there exists an unknown asymptotically stabilizing controller , which we call the expert controller. We assume that is smooth. Our goal is to learn a controller such that having asymptotically stabilizes the origin of the system (5). Towards this goal, we use a set of finite-length expert solutions of (5), where: for each , the trajectory and the control input are smooth and satisfy for all ; is the length of a solution; and . We also ensure that the “trivial” expert solution, wherein and for all , is included in .
In practice, we can record the values of continuous solutions provided by the expert only at certain sampling instants. In this work, however, we choose to work in continuous-time to simplify the theoretical analysis. We can do this without sacrificing practical applicability because it is well-known that continuous-time controller designs can be implemented via emulation and still guarantee stability .
We assume that the system (5) is feedback linearizable on an open set containing the origin and that the expert demonstrations belong to for all . To avoid the cumbersome notation that comes with feedback linearization of multiple-input systems, we assume that , that is, the system (5) only has a single input. Readers familiar with feedback linearization can verify that all the results extend to the multiple-input case, mutatis mutandis (refer to [21, Ch. 4-5] for a complete introduction to feedback linearization). In the single-input case, the system (5) is feedback linearizable on the open set if there is an output function that has relative degree , i.e., for all , for and . Moreover, the map:
is a diffeomorphism from to its image , i.e., the inverse exists and is also smooth. We further assume, without loss of generality, that .
3. Learning a stabilizing controller from expert demonstrations
Here, we describe the methodology for constructing an asymptotically stabilizing controller when . We consider the case when in Section 4.
3.1. Feedback linearization
where and . The feedback law:
further transforms the system (5) into the system given by:
where is a Brunovsky pair.
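For reference, a single-input Brunovsky pair (a chain of integrators) can be constructed numerically as follows; the helper name is ours:

```python
import numpy as np

def brunovsky_pair(n):
    """Brunovsky canonical pair for a single-input chain of n integrators."""
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)   # ones on the superdiagonal
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0               # the input drives the last integrator
    return A, B

A, B = brunovsky_pair(3)
print(A)
print(B.ravel())
```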
The expert controller in the -coordinates is given by . The smoothness of implies that the function is also smooth.
3.2. Expert demonstrations
Recall that the set of demonstrations consists of solutions of the system (5). Using (6) and (8), we can represent the demonstrations in -coordinates. We denote the resulting set by , where functions and are given by:
for all and for all . We define the set of demonstrations evaluated at time as:
It can be easily verified that the demonstrations in satisfy the dynamics (9) and .
3.3. Constructing the learned controller
We denote by the controller learned from the expert demonstrations. We begin by partitioning time into intervals of length and indexing these intervals with . Let us construct the following matrices for :
Our first attempt at constructing the learned controller, which we improve upon later, is to use the piecewise-continuous controller for all , where:
This lemma can be verified by substitution. ∎
Affine independence of the set is a generic property, i.e., this is true for almost all expert demonstrations. In practice, if this set is not affinely independent, a user can eliminate the affinely dependent demonstrations and request the expert to provide additional demonstrations.
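The core computation behind this construction, solving for affine weights that reproduce the current state from the demonstration states at a sampling instant, can be sketched numerically as follows. The helper name is hypothetical, and the toy data assumes two affinely independent demonstrations of a one-dimensional system:

```python
import numpy as np

def affine_coefficients(Z, z):
    """Weights a with Z @ a = z and sum(a) = 1.

    Z: matrix whose columns are the demonstration states at the sampling
    instant; z: current state. Affine independence of the columns makes
    the augmented system below square and invertible."""
    m = Z.shape[1]
    M = np.vstack([Z, np.ones((1, m))])     # append the sum-to-one row
    return np.linalg.solve(M, np.concatenate([z, [1.0]]))

# toy check: demonstrations at states 0 and 2; current state 1
Z = np.array([[0.0, 2.0]])
a = affine_coefficients(Z, np.array([1.0]))
print(a)  # [0.5 0.5]
```

The input applied over the interval is then the same affine combination of the demonstrated inputs.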
We note, however, that the control law (14) samples the state with a sampling time and essentially operates in open loop in between these samples. To allow for closed-loop control, we propose the improved controller that has, for all , the following form:
3.4. Stability of the learned controller
for all . Integrating the dynamics, we show that the sequence satisfies:
Consider the system (5) and assume it is feedback linearizable on an open set containing the origin. Let and suppose we are given a finite set of demonstrations generated by the system (5), in closed loop with a smooth asymptotically stabilizing controller , and satisfying for all . Assume that is affinely independent for all . Then, there is a such that for all , the origin of system (5) in closed-loop with controller (8)-(15) is uniformly asymptotically stable.
Asymptotic stability of (5) and (9) is equivalent on and , and, therefore, the set given by (10) and (11) also consists of asymptotically stable solutions, i.e., there exists such that for all :
We claim that, for any constants , there exists such that for all . This claim will be shown using an argument similar to that of the proof of Lemma 16 in . Using Lemma 4.3 from , there exist class functions such that for all . Let . Define, for all , to be the solution of and obtain:
Since is a continuous function and is compact, the extreme value theorem implies that is well-defined. For all , it is true that:
Using the previous claim with , and , we conclude the existence of such that, for all , the following inequality holds:
for all . Therefore, by (20), for all and for all , we have:
According to stability conditions for linear discrete-time systems (see Theorem 10.9 in ), the equation (23) implies that, for all , the system (18) is uniformly exponentially stable. From , we know that uniform exponential stability of the sampled-data system (18) implies uniform exponential stability of the system (9)-(15) because the matrices are bounded for . Uniform asymptotic stability of the origin for the system (9)-(15) in the -coordinates implies uniform asymptotic stability of the origin for the feedback equivalent system (5)-(8)-(15) in -coordinates . ∎
Theorem 3.4 shows the existence of such that for all . In practice, a user can determine satisfying this condition by directly computing for various .
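Such a search can be sketched as follows; `monodromy` stands for a user-supplied routine that builds the matrix from the demonstrations (its construction is given above), and the toy model below is invented purely for illustration:

```python
import numpy as np

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

def first_stable_period(monodromy, candidates, rho_max=0.99):
    """Return the first candidate T whose monodromy matrix has spectral
    radius below rho_max, or None if no candidate qualifies."""
    for T in candidates:
        if spectral_radius(monodromy(T)) < rho_max:
            return T
    return None

# toy monodromy model: a rotation damped by exp(-T)
def toy_monodromy(T):
    c, s = np.cos(T), np.sin(T)
    return np.exp(-T) * np.array([[c, -s], [s, c]])

print(first_stable_period(toy_monodromy, [0.0, 0.5, 1.0]))  # 0.5
```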
The fact that we assume feedback linearizability on some open set presents the user with the opportunity to use either local or global feedback linearization results, depending on what their application allows for. We recommend  as a good starting point to find conditions for both local (see Theorem 4.2.3 in ) and global (see Theorem 9.1.1 in ) feedback linearizability.
In Theorem 3.4, we provide a guarantee that the learned controller stabilizes the system at the origin. This result is also useful when the objective of the learned controller is to track a trajectory. The key idea is to recast the problem of trajectory tracking as that of stabilizing the error dynamics (see Section 4.5 in ). We consider this generality of the learned controller to be a strength of our approach. We illustrate this experimentally in Section 6.1.
Although we assume in this work an exact knowledge of the state, in most applications, the state is estimated via an observer. Depending on the design of the observer, the stability results of our methodology may also vary. To give an example, using Lemma III.8 from , we can show that, with a well-designed sampled-data observer providing state estimates of both the expert demonstrations and the current state, we still retain asymptotic stability. In general, however, a persistent error between the state estimate and the current state can weaken the guarantee of asymptotic stability of the closed-loop system to that of practical stability.
4. Learning from more than expert demonstrations
Here, we extend the previous results to the case where more than  demonstrations are available. For every interval of length , we show how to select a subset of demonstrations that results in the best approximation of the expert controller.
We begin by reviewing several key concepts from multivariate linear interpolation. Let  be a finite set of points in . The convex hull of a set , denoted , is the set of all convex combinations of points in . For any , we define the subset . A Cartesian product of two sets has a natural left projection map (resp., right projection map ) given by (resp., ). An -simplex is the convex hull of a set of affinely independent points. A triangulation of points in , denoted , is a collection of -simplices such that their vertices are points in , their interiors are disjoint, and their union is . We denote the -simplex in containing by and define a vertex index set associated with in , denoted , as to satisfy . The Delaunay triangulation of , denoted , is a triangulation with the property that the circum-hypersphere of every -simplex in the triangulation contains no point from in its interior. It is unique if no  points are on the same hyperplane and no  points are on the same hypersphere .
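These objects are available off the shelf; a sketch with SciPy (the sample points are made up) showing the Delaunay triangulation, the simplex containing a query point with its vertex index set, and a piecewise-linear interpolant built on the same triangulation:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.spatial import Delaunay

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tri = Delaunay(pts)              # Delaunay triangulation of pts

q = np.array([[0.25, 0.25]])
s = tri.find_simplex(q)[0]       # index of the simplex containing q
verts = tri.simplices[s]         # its vertex index set

# piecewise-linear interpolant on the same triangulation; here it samples
# a linear function, which such an interpolant reproduces exactly
g_hat = LinearNDInterpolator(pts, pts.sum(axis=1))
val = g_hat([[0.3, 0.4]])[0]
print(s, sorted(verts), val)
```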
Let be an unknown function. Given a finite set of points and a set of function values , an interpolant is an approximation of that satisfies for all . We define an interpolant , called a piecewise-linear interpolant based on , as:
4.2. Constructing the learned controller
Let us describe the construction of the controller for . Define and . We partition time into intervals of length , indexed by . For each , we propose using the piecewise-continuous control law , where is defined as follows:
For , the value of is given by the value at of a piecewise-linear interpolant . Since a piecewise-linear interpolant is determined by an associated triangulation , this implies that there is a family of possible learned controllers we can construct from . Moreover, the value of the interpolant depends only on the values of and , where is a vertex set associated with in .
For , let be the Euclidean projection of onto . Define the index set and express as an affine combination . Then, the value of is given by .
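For intuition, this projection can be computed as a small quadratic program over the convex-hull weights; a sketch using SciPy's SLSQP solver (the helper name and the example vertices are ours):

```python
import numpy as np
from scipy.optimize import minimize

def project_onto_hull(vertices, x):
    """Euclidean projection of x onto conv(vertices):
    minimize ||V^T w - x||^2 subject to w >= 0 and sum(w) = 1."""
    V = np.asarray(vertices, dtype=float)
    m = len(V)
    res = minimize(lambda w: np.sum((V.T @ w - x) ** 2),
                   np.full(m, 1.0 / m),                 # uniform start
                   bounds=[(0.0, None)] * m,            # w >= 0
                   constraints=({'type': 'eq',
                                 'fun': lambda w: w.sum() - 1.0},),
                   method='SLSQP')
    return V.T @ res.x

hull = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(project_onto_hull(hull, np.array([1.0, 1.0])))  # approx. [0.5 0.5]
```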
In both cases, the controller can be concisely expressed if, given a vertex index set for and , we construct the following matrices:
Note that, in the absence of uncertainties and disturbances, by Lemma 3.2, the coefficients satisfy:
Therefore, for all , the controller (26) applies the input equal to that applied by the following controller:
Incidentally, this corresponds to the value of the piecewise-linear interpolant at .
4.3. Stability of the learned controller
Let us define the collection of index sets , where each selects vertices of an -simplex in and . Note that is a finite set because there are only finitely many -simplices in . Suppose the index set associated with in is for some . Assuming (27) holds, the system (9) in closed loop with (26) is given by:
for all . Integrating the dynamics shows that the sequence satisfies:
Note that now, instead of a single monodromy matrix, we have a set of monodromy matrices .
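A simple sufficient check in this setting (our own sketch; the condition in the cited theorem differs in detail): if every monodromy matrix in the set is a contraction in the induced 2-norm, then any switching sequence of these maps is exponentially stable, since the norms multiply along the sequence:

```python
import numpy as np

def all_contractive(matrices, bound=1.0):
    """True if every matrix has induced 2-norm below `bound`; with
    bound <= 1 this certifies stability under arbitrary switching."""
    return all(np.linalg.norm(M, 2) < bound for M in matrices)

mats = [0.9 * np.eye(2), np.array([[0.5, 0.2], [0.0, 0.5]])]
print(all_contractive(mats))               # True
print(all_contractive([1.1 * np.eye(2)]))  # False
```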
The following result is an extension of Theorem 3.4 for demonstrations.
Consider the system (5) and assume it is feedback linearizable on an open set containing the origin. Let and suppose we are given a finite set of demonstrations generated by the system (5), in closed loop with a smooth asymptotically stabilizing controller , and satisfying for all . Assume that is affinely independent for all . Then, there exists a such that for all , the origin of system (5) in closed-loop with controller (8)-(26) is uniformly asymptotically stable.
The proof of Theorem 3.4 implies the existence of such that for all . We choose . The system (9) in closed loop with controller (26) can be represented as a switched system (30), where is a switching sequence. By Theorem 3 in , the fact that for all and implies that, for any switching signal , the system (30) is uniformly exponentially stable. Since the matrices are bounded for , the system (9) in closed loop with controller (26) is uniformly exponentially stable. Uniform asymptotic stability of the origin for the system (9)-(26) in the -coordinates implies uniform asymptotic stability of the origin for the feedback equivalent system (5)-(8)-(26) in -coordinates . ∎
4.4. Optimality of the learned controller
Recall that the piecewise-linear interpolant defining the controller depends on the choice of the triangulation . Assuming (27) holds, this choice reduces to the choice of the triangulation , which dictates the index set of demonstrations used to construct the solution for each interval . Without loss of generality, in what follows we discuss the solutions on the interval only — a solution on can be represented as a solution on with the initial condition equal to .
Typically, there are several triangulations one can define given a set of sample points . We want our choice of triangulation to result in closed-loop trajectories that approximate expert trajectories well for any initial state distinct from . More precisely, we want to find a triangulation that best approximates the function , which defines solutions of (9) under the expert controller , by the function , which defines the solutions of (9) under the learned controller . That is, we want solution to:
where is the class of functions to which the expert solutions belong. We can view (31) as a game where we pick , and the adversary, upon seeing our choice of , picks to maximize the cost.
Let us leverage the properties has by virtue of describing solutions of (9) under the expert controller to determine the class . We will use the notation for . By Theorem 4.1 in [16, Ch. V], since is a smooth function, the Hessians of the coordinate functions of the solution are continuous with respect to and . By the extreme value theorem, compactness of implies that, for every , there exists such that for all and . Thus, the norms of the Hessians of the coordinate functions can be bounded by . We denote the class of functions whose coordinate functions have Hessian norm smaller than or equal to by . For a fixed , and, therefore, the function belongs to , the set of all functions from to .
For any and any learned controller , the worst-case trajectory approximation error on the interval is given by:
where is the trajectory of the system (9) with the initial condition under the expert controller , is the trajectory of the system (9) with the same initial condition under the learned controller , and is the set of all functions from to . The smallest worst-case trajectory approximation error on the interval is given by: