On the Design of LQR Kernels for Efficient Controller Learning

09/20/2017, by Alonso Marco et al.

Finding optimal feedback controllers for nonlinear dynamic systems from data is hard. Recently, Bayesian optimization (BO) has been proposed as a powerful framework for direct controller tuning from experimental trials. For selecting the next query point and finding the global optimum, BO relies on a probabilistic description of the latent objective function, typically a Gaussian process (GP). As is shown herein, GPs with a common kernel choice can, however, lead to poor learning outcomes on standard quadratic control problems. For a first-order system, we construct two kernels that specifically leverage the structure of the well-known Linear Quadratic Regulator (LQR), yet retain the flexibility of Bayesian nonparametric learning. Simulations of uncertain linear and nonlinear systems demonstrate that the LQR kernels yield superior learning performance.


I Introduction

A core problem of learning control is to determine optimal feedback controllers for (partially) unknown nonlinear systems from experimental data. Reinforcement learning (RL) [1, 2] is a promising framework for this, yet it often requires performing many experiments on the physical system to find suitable controllers at all, which limits the applicability of such techniques. Therefore, a lot of research effort has been invested into the data efficiency of RL, aiming at learning controllers from as few experiments as possible. Recently, Bayesian optimization (BO) has been proposed for RL as a promising approach in this direction. BO employs a probabilistic description of the latent objective function (typically a Gaussian process (GP)), which allows for selecting the next control experiments in a principled manner, e.g., to maximize information gain [3] or perform safe exploration [4].

While BO provides a promising framework for learning controllers in fairly general settings, the full power of Bayesian learning is often not exploited. A key advantage of Bayesian methods is that they allow for combining prior problem knowledge with learning from data in a principled manner. For GP models, this concerns specifically the choice of the kernel, which captures the covariance between function values at different inputs and is thus the core component for modeling prior knowledge about the function shape. By choosing standard kernels, however, naive BO approaches often do not exploit this opportunity to improve learning performance.

In this paper, we show how structural knowledge about the optimal control problem at hand can be leveraged to design specific kernels that improve data efficiency in learning control. For a first-order nonlinear quadratic optimal control problem, we propose two LQR kernels that exploit the structure of the famous Linear Quadratic Regulator (LQR) problem, given in the form of an approximate linear model of the true nonlinear dynamics, while retaining the flexibility of nonparametric GP regression.

Contributions

In detail, this paper makes the following contributions:

  1. We discuss how the structure of the well-known LQR problem can be leveraged for efficient learning of controllers for nonlinear systems.

  2. This discussion leads to the proposal of two new kernels for GP regression in the context of learning control: the parametric and nonparametric LQR kernels.

  3. The improved learning performance achieved with these kernels over a standard kernel is demonstrated through numerical simulations.

Related work

BO for learning controllers has recently been considered, for example in [3, 5, 4, 6, 7], in different flavors. While [3] and [5] focus on data efficiency by maximizing the information gain in each iteration, [4] and [6] propose methods for safe exploration. Different BO algorithms are compared in [7]. These works all consider tuning of state-feedback controllers with a quadratic cost (possibly saturated) similar to the setting herein, but using standard kernels.

The design of customized kernels for GP regression has been considered before in the context of control and robotics for related problems. A kernel for bipedal locomotion, which captures typical gait characteristics, is proposed in [8]. In [9], an impedance-based model is incorporated as prior knowledge for improved predictions in human-robot interaction. For the problem of maximizing power generation in photovoltaic plants, the authors in [10] incorporate explicit basis functions about known power curves in the kernel. In [5], a kernel is designed to model information from simulation and physical experiments, in order to leverage both sources of information for RL.

None of the above references considers the problem of incorporating the structure of the LQR problem for improving data-efficiency in learning control with BO.

Outline

We introduce the considered learning control problem in Sec. II, and summarize BO for RL in Sec. III, together with the necessary background on GPs. While the BO framework for learning control is introduced for general multivariate systems, we focus on the special case of a scalar problem thereafter and develop the LQR kernels in Sec. IV. Numerical results in Sec. V illustrate the improved learning performance of the proposed kernels over a standard kernel. The paper concludes with remarks in Sec. VI.

Notation

A Gaussian random variable $x$ with mean $m$ and variance $V$ is denoted by $x \sim \mathcal{N}(m, V)$. The expected value of $x$ is denoted by $\mathbb{E}[x]$, while $\mathrm{Cov}[x_1, x_2]$ denotes the covariance of $x_1$ and $x_2$. We also use $\mathrm{Var}[x] = \mathrm{Cov}[x, x]$.

II Learning Control Problem

We consider regulation of the nonlinear stochastic system

$x_{k+1} = g(x_k, u_k) + v_k$ (1)

with state $x_k \in \mathbb{R}^{n_x}$, control input $u_k \in \mathbb{R}^{n_u}$, and random noise $v_k \sim \mathcal{N}(0, V)$, independent at different time steps. We assume that the state $x_k$ can be measured or otherwise estimated. The system dynamics $g$ are unknown. The control objective is to find a state-feedback controller

$u_k = F x_k$ (2)

with controller gain $F$ such that the quadratic cost

$J = \lim_{K \to \infty} \frac{1}{K}\, \mathbb{E}\Big[ \sum_{k=0}^{K-1} x_k^\top Q x_k + u_k^\top R u_k \Big]$ (3)

with symmetric weights $Q \succeq 0$ and $R \succ 0$ is minimized. A quadratic cost is a very common choice in optimal control [11], expressing the designer's preference in the fundamental trade-off between control performance and control effort.

While the solution of the above problem is standard when the dynamics (1) are known and linear [11], here, we face the problem of optimal control under unknown and nonlinear dynamics.

To address this problem, we follow a direct RL approach [12], where the optimal controller is learned by directly evaluating the performance of candidate controllers on the real system (1) without learning an (intermediate) dynamics model. We employ Bayesian optimization (BO) as a popular approach for sequential global optimization, which leverages the information from previous trials in order to propose the next candidate controller so as to eventually find the optimum of (3). Because evaluations on the system are expensive, we seek to find the optimal controller with as few evaluations as possible. To this end, we address herein how to leverage information from a linear model that approximates the true system (1).

III Bayesian Optimization for Learning Control

We briefly introduce necessary background material on Gaussian processes (GPs) and Bayesian optimization (BO), before describing reinforcement learning with BO.

III-A Gaussian process regression

Let $\theta$ denote the free parameters of a feedback controller; that is, $\theta = F$ in (2), or any other parameterization. The dependence of the cost (3) on $\theta$ is unknown a priori because of the lack of knowledge about the system (1). We use GP regression to approximate the function $\theta \mapsto J(\theta)$ from data, i.e., noisy function evaluations

$\hat{J} = J(\theta) + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, \sigma_n^2).$ (4)

Obtaining one data point $(\theta, \hat{J})$ corresponds to performing a closed-loop experiment on the true system (1) with controller $\theta$, recording the state-input trajectory, and computing the cost from these data (in case of (3), a finite approximation with a sufficiently long horizon $K$ is used).

A GP can be defined as a probability distribution over the space of functions whose restriction to any finite number of function values is jointly Gaussian [13, p. 13]. A GP is specified through its prior mean function $\mu(\cdot)$ and covariance function $k(\cdot, \cdot)$. We write

$J \sim \mathcal{GP}(\mu, k).$

Hence, $\mu(\theta) = \mathbb{E}[J(\theta)]$ is the expected function value, and $k(\theta, \theta') = \mathrm{Cov}[J(\theta), J(\theta')]$ captures the covariance between any two function values and is used to model uncertainty. The latter is also called the kernel and must satisfy the following property to give rise to a valid covariance function:

Definition 1 (according to [13])

Let $k : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ be a function, and $K \in \mathbb{R}^{N \times N}$ be the symmetric matrix whose entries are computed as $K_{ij} = k(\theta_i, \theta_j)$ from the collection $\theta_i \in \mathbb{R}^d$, $i = 1, \dots, N$. The function $k$ is called a positive semidefinite kernel if $K$ is positive semidefinite for any finite collection $\theta_1, \dots, \theta_N$.

The matrix $K$ is called the Gram matrix.

A kernel typically has its own parameters, called hyperparameters, such as length scales or output variance. By choosing the prior mean and the kernel, the user can specify prior knowledge about the function such as expected shape, length scales, and smoothness properties.

In GP regression, learning a function amounts to predicting the (normal) distribution of the function value $J(\theta)$ at an arbitrary input $\theta$ based on previous evaluations. Given $n$ data points $\mathcal{D}_n = \{(\theta_i, \hat{J}_i)\}_{i=1}^{n}$, the posterior mean and variance of $J(\theta)$ can be stated in closed form as

$\mathbb{E}[J(\theta) \mid \mathcal{D}_n] = \mu(\theta) + k_n(\theta)^\top K_\sigma^{-1} (\hat{J}_n - \mu_n)$ (5)
$\mathrm{Var}[J(\theta) \mid \mathcal{D}_n] = k(\theta, \theta) - k_n(\theta)^\top K_\sigma^{-1} k_n(\theta)$ (6)

where $K_\sigma := K + \sigma_n^2 I$ with the Gram matrix $K_{ij} = k(\theta_i, \theta_j)$, $[k_n(\theta)]_i = k(\theta, \theta_i)$, $\hat{J}_n := (\hat{J}_1, \dots, \hat{J}_n)^\top$, and $\mu_n := (\mu(\theta_1), \dots, \mu(\theta_n))^\top$.

In addition to computing the posterior distribution, the data can also be used to optimize hyperparameters in order to improve the model fit. A popular approach is maximizing the marginal likelihood of the data, $p(\hat{J}_n \mid \theta_1, \dots, \theta_n)$, which is given in logarithmic form as [13, p. 19]

$\log p(\hat{J}_n \mid \theta_1, \dots, \theta_n) = -\tfrac{1}{2} (\hat{J}_n - \mu_n)^\top K_\sigma^{-1} (\hat{J}_n - \mu_n) - \tfrac{1}{2} \log \det K_\sigma - \tfrac{n}{2} \log 2\pi.$ (7)
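To make (5)–(7) concrete, the following minimal sketch implements the GP posterior and the log marginal likelihood for scalar inputs using NumPy. It assumes a zero prior mean and grid-based test inputs; the function and variable names are ours for illustration, not taken from any original implementation.

```python
import numpy as np

def se_kernel(t1, t2, sig=1.0, ell=0.1):
    # Squared exponential kernel (11) between two sets of scalar inputs.
    d = np.subtract.outer(np.asarray(t1), np.asarray(t2))
    return sig**2 * np.exp(-0.5 * (d / ell)**2)

def gp_posterior(T, y, Ts, kern, sigma_n=0.05):
    # Posterior mean (5) and variance (6) at test inputs Ts (zero prior mean).
    Ksig = kern(T, T) + sigma_n**2 * np.eye(len(T))   # K_sigma = K + sigma_n^2 I
    ks = kern(Ts, T)                                  # cross-covariances k_n(.)
    mean = ks @ np.linalg.solve(Ksig, y)              # Eq. (5)
    var = np.diag(kern(Ts, Ts)) - np.sum(ks * np.linalg.solve(Ksig, ks.T).T, axis=1)  # Eq. (6)
    return mean, var

def log_marginal_likelihood(T, y, kern, sigma_n=0.05):
    # Log evidence (7), the objective for hyperparameter optimization.
    Ksig = kern(T, T) + sigma_n**2 * np.eye(len(T))
    L = np.linalg.cholesky(Ksig)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.sum(np.log(np.diag(L))) - 0.5 * len(y) * np.log(2 * np.pi)
```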

III-B Bayesian optimization

Bayesian optimization [14] denotes a class of global optimization methods, where a probabilistic description of the latent objective function is used for data-efficient optimization. Here, the probabilistic description is given by the GP $J \sim \mathcal{GP}(\mu, k)$. Given the current data set $\mathcal{D}_n$, a utility $u(\theta \mid \mathcal{D}_n)$ is defined, which is maximized in each iteration to find the next evaluation point

$\theta_{n+1} = \arg\max_{\theta}\, u(\theta \mid \mathcal{D}_n).$ (8)

Different BO algorithms differ primarily in how the selection of the next evaluation point (8) is formulated. Popular BO algorithms include expected improvement (EI) [15], probability of improvement (PI) [16], GP upper confidence bound (GP-UCB) [17], and entropy search (ES) [18].

The method developed herein is agnostic to the type of BO algorithm used. For the examples in Sec. V, we employ EI, which uses

$u(\theta \mid \mathcal{D}_n) = \mathbb{E}\big[ \max(\hat{J}_{\min} - J(\theta),\, 0) \mid \mathcal{D}_n \big]$ (9)

for the optimization in (8), where $\hat{J}_{\min}$ denotes the lowest function value in the current data set $\mathcal{D}_n$. EI thus selects the next query point where the expected improvement upon $\hat{J}_{\min}$ (under the GP of $J$) is maximal.
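For a Gaussian posterior, the expectation in (9) has a well-known closed form. A hedged sketch, reusing gp_posterior from above; the grid-based maximization of (8) is a simplification we adopt for the scalar domain:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, var, y_min):
    # EI utility (9) for minimization: E[max(y_min - J(theta), 0)].
    s = np.sqrt(np.maximum(var, 1e-12))   # guard against zero variance
    z = (y_min - mu) / s
    return (y_min - mu) * norm.cdf(z) + s * norm.pdf(z)

# Next evaluation point (8), maximized over a dense grid T_grid:
#   mu, var = gp_posterior(T, y, T_grid, se_kernel)
#   t_next = T_grid[np.argmax(expected_improvement(mu, var, y.min()))]
```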

III-C Reinforcement learning with Bayesian optimization

Algorithm 1 summarizes direct RL using Bayesian optimization.

1:Specify objective function $J$ (e.g., (3) with finite horizon $K$)
2:Specify GP prior (mean $\mu$, kernel $k$)
3:Initialize: controller $\theta_1$; data set $\mathcal{D}_0 = \emptyset$
4:while (not terminated) do      e.g., fixed number of experiments
5:      perform closed-loop experiment with controller $\theta_n$
6:      compute $\hat{J}_n$ from experimental data
7:      add evaluation $(\theta_n, \hat{J}_n)$ to data set $\mathcal{D}_n$
8:      update GP posterior (5), (6)
9:      [optional:] optimize hyperparameters (e.g., maximize (7))
10:      compute next controller $\theta_{n+1}$ via (8)
11:end while
12:determine 'best guess' $\theta^*$ for the controller parameters, e.g., minimum of posterior mean (5)
13:return $\theta^*$
Algorithm 1 Learning control with Bayesian optimization
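The following sketch mirrors Algorithm 1 over a gridded scalar controller domain, reusing the helpers from the previous sketches. The callback run_experiment, which stands in for one closed-loop experiment returning the noisy cost (4), is a hypothetical placeholder.

```python
import numpy as np

def bo_learning_control(run_experiment, kern, t_grid, n_exp=10, seed=0):
    # Sketch of Algorithm 1 over a gridded controller domain t_grid.
    rng = np.random.default_rng(seed)
    T, y = [rng.choice(t_grid)], []                  # line 3: initial controller
    for _ in range(n_exp):                           # line 4: fixed budget
        y.append(run_experiment(T[-1]))              # lines 5-7: evaluate and store
        mu, var = gp_posterior(np.array(T), np.array(y), t_grid, kern)      # line 8
        T.append(t_grid[np.argmax(expected_improvement(mu, var, min(y)))])  # line 10
    mu, _ = gp_posterior(np.array(T[:-1]), np.array(y), t_grid, kern)
    return t_grid[np.argmin(mu)]                     # line 12: best guess
```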

This framework has been successfully applied in experimental settings to learn feedback controllers for the control problem of Sec. II and related scenarios. Berkenkamp et al. [4] optimize a state-feedback controller (2) for quadrotor trajectory tracking using a safe BO algorithm [19], which builds on GP-UCB. Marco et al. [3] propose to parametrize the feedback gain as an LQR policy [20], whose optimal weights are learned using ES, which maximizes the information gain from each experiment. This algorithm has been extended in [5] to include information from different sources (such as simulations and physical experiments). Both methods [3, 5] were successfully applied to learn pole balancing controllers. Calandra et al. [7] compare PI, EI, and GP-UCB for learning the parameters of a discrete event controller for a walking robot.

IV Constructing LQR Kernels

In this section, we present two kernels that are specifically designed for the control problem of Sec. II and incorporate approximate model knowledge in the form of a linear approximation to (1). Because, for a linear system, the solution to the optimal control problem of Sec. II is the well-known LQR, we term these kernels LQR kernels. The following derivations are presented for a first-order system (1), where all variables are scalar. Small letters are used in place of the capital ones to emphasize scalar variables (e.g., $q$ and $r$ instead of $Q$ and $R$ in (3), and $f$ instead of $F$ in (2)). We consider minimization of the cost (3), which is rewritten as

$J(f) = \lim_{K \to \infty} \frac{1}{K}\, \mathbb{E}\Big[ \sum_{k=0}^{K-1} q\, x_k^2 + r\, u_k^2 \Big].$ (10)

We start by considering a learning example with a standard kernel choice for the GP, which motivates why a specifically designed kernel can be desirable.

IV-A Problems with a standard kernel

The most common kernel choice in GP regression is arguably the squared exponential (SE) kernel [13]

$k_{\mathrm{SE}}(f, f') = \sigma^2 \exp\Big( -\frac{(f - f')^2}{2 \ell^2} \Big)$ (11)

with signal variance $\sigma^2$ and length-scale $\ell$ as its hyperparameters.

with signal variance and length-scale as its hyperparameters. Let us consider the problem of learning the cost function (10) via GP regression with SE kernel for the following linear example:

Example 1

Let (1) be the scalar linear system $x_{k+1} = a\, x_k + b\, u_k + v_k$ with fixed, known parameters $a$ and $b$, and noise variance $\sigma_v^2$.

Figure 1 shows the prior GP and the posterior GP after obtaining four data points (i.e., after four evaluations of controllers $f$). A few issues are apparent from the posterior: (i) the kernel has problems with the different length scales of the function ($J$ is steep toward the boundary of the stabilizing region, but rather flat in the center); (ii) the GP does not generalize well to regions where no data has been seen, where the posterior mean resorts to the prior; and (iii) the overall fit is not very good.

Clearly, the fit will improve with more data; but, for efficient and fast learning of controllers, we are particularly interested in good fits from just a few data points. Hence, we seek to improve the fitting properties of the GP by exploiting knowledge about the system (1) in terms of an approximate linear model.

Fig. 1: Prior and posterior GPs of Example 1 using the squared exponential kernel (hyperparameters $\sigma$ and $\ell$). The thick line represents the GP mean, and the light blue area the GP variance (+/- two standard deviations). The true function is shown in the bottom plot in dashed gray, and data points in orange.

IV-B Incorporating prior knowledge

In no practical situation does one have a perfect model of the system to be controlled. At the same time, it is often possible to obtain a rough model, e.g., from first-principles modeling or some system identification procedure. Here, we assume that an uncertain linear model

$x_{k+1} = a\, x_k + b\, u_k + v_k, \quad v_k \sim \mathcal{N}(0, \sigma_v^2)$ (12)
$a \in [a_l, a_u], \quad b \in [b_l, b_u]$ (13)

is available as an approximation to (1), e.g., from linearization of a first-principles model with (possibly) some uncertainty about the physical parameters.

In the following, we will consider controller gains $f$ such that the system (12) is guaranteed stable for all parameters (13). That is, we consider $f \in \mathcal{F}$ with

$\mathcal{F} := \{ f \in \mathbb{R} : |a + b f| < 1 \ \text{for all} \ a \in [a_l, a_u],\ b \in [b_l, b_u] \}.$ (14)

This restriction makes sense, for example, in safety-critical applications, where one wants to avoid the risk of trying an unstable controller based on the system knowledge available (i.e., (12), (13)). Moreover, the restriction to $\mathcal{F}$ will ensure that subsequent calculations are well-defined.

If $a$ and $b$ were known, the functional dependence of the cost on the controller gain $f$ in (2) for the linear system (12) could be derived using standard control theory.

Fact 1

Consider the system (12) with known parameters $a$ and $b$, and let $f \in \mathcal{F}$. Then, the cost (10) is given by¹

$J(f) = \frac{q + r f^2}{1 - (a + b f)^2}.$ (15)

¹In the notation $J(f)$, we omit the parametric dependence on $a$ and $b$. The noise variance $\sigma_v^2$ is dropped since it is a multiplicative constant, which does not play a role in the later optimization, and we assume that $q$ and $r$ are fixed.

The controlled process $x_{k+1} = (a + b f)\, x_k + v_k$ is stable by assumption and thus converges to a stationary process with zero mean and variance $P$, where $P$ is the unique positive solution to $P = (a + b f)^2 P + \sigma_v^2$, i.e., $P = \sigma_v^2 / (1 - (a + b f)^2)$ [21, Theorem 3.1]. For stationary $x_k$, (10) resolves into

$J = \mathbb{E}[\, q\, x_k^2 + r\, u_k^2 \,] = (q + r f^2)\, P.$

Equation (15) then follows.
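As an aside, (15) makes the nominal cost trivial to evaluate in code. A one-line sketch (our naming), guarded by the stability condition (14):

```python
def lqr_cost(f, a, b, q=1.0, r=1.0):
    # Closed-form LQR cost (15) of the scalar system (12) under gain f;
    # only valid for stabilizing gains, i.e., |a + b*f| < 1 as in (14).
    assert abs(a + b * f) < 1.0, "gain f is not stabilizing"
    return (q + r * f**2) / (1.0 - (a + b * f)**2)
```

Note how the cost diverges as $|a + b f|$ approaches one, which explains the steepness near the stability boundary observed in Fig. 1.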

If we are uncertain about $a$ and $b$, a collection of possible costs (15) emerges from all the possible combinations of $a$ and $b$ within their ranges (13). We assume this collection of costs to be explained by a Gaussian process $J \sim \mathcal{GP}(\mu, k)$. While any arbitrary choice of $a$ and $b$ in (12) is only an approximation to (1), it yields a cost (15) that contains useful structural information, which can be leveraged for faster learning of controllers from data. In the next sections, we show how this prior knowledge can be exploited to construct LQR kernels.

IV-C Parametric LQR kernel

A reasonable choice for a prior model of the cost is

$J(f) = w\, \frac{q + r f^2}{1 - (\hat{a} + \hat{b} f)^2}, \quad w \sim \mathcal{N}(0, \sigma_p^2),$ (16)

where $\hat{a}$ and $\hat{b}$ are the midpoints of the uncertainty intervals (13).

Equation (16) is a standard parametric model with a single feature

$\phi(f) = \frac{q + r f^2}{1 - (\hat{a} + \hat{b} f)^2}$

and Gaussian prior on the weight $w$. It is well-known (see, e.g., [13]) that $J$ is then a GP,

$J \sim \mathcal{GP}(0, k_{\mathrm{pLQR}}),$ (17)

where we have assumed $\mathbb{E}[w] = 0$, with kernel

$k_{\mathrm{pLQR}}(f, f') = \sigma_p^2\, \phi(f)\, \phi(f') = \sigma_p^2\, \frac{(q + r f^2)(q + r f'^2)}{\big(1 - (\hat{a} + \hat{b} f)^2\big)\big(1 - (\hat{a} + \hat{b} f')^2\big)}$ (18)

and hyperparameters $\hat{a}$, $\hat{b}$, and $\sigma_p$. We refer to (18) as the parametric LQR kernel because it captures the cost function of the linear system with quadratic cost, and thus the structure of the LQR problem.
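A sketch of (18) with our naming; thanks to its rank-one structure, the kernel is a simple outer product of the feature evaluated at the two inputs:

```python
import numpy as np

def lqr_feature(f, a_hat, b_hat, q=1.0, r=1.0):
    # Feature phi(f): the LQR cost (15) for the nominal model (a_hat, b_hat).
    f = np.asarray(f, dtype=float)
    return (q + r * f**2) / (1.0 - (a_hat + b_hat * f)**2)

def k_plqr(f1, f2, a_hat, b_hat, sig_p=1.0):
    # Parametric LQR kernel (18): sig_p^2 * phi(f) * phi(f').
    return sig_p**2 * np.multiply.outer(lqr_feature(f1, a_hat, b_hat),
                                        lqr_feature(f2, a_hat, b_hat))
```

Passing, e.g., kern=lambda x, y: k_plqr(x, y, a_hat, b_hat) to the gp_posterior sketch of Sec. III-A swaps the SE kernel for the LQR kernel without further changes.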

To illustrate the performance of the parametric LQR kernel, we revisit Example 1 using this kernel instead of the SE kernel. The top two graphs of Fig. 2 show the prior and posterior GP for the same data points as in Fig. 1 when using (18) with hyperparameters $\hat{a}$ and $\hat{b}$ set to the true parameters of Example 1. We see that the posterior fit from only four data points is almost perfect and much better than the one in Fig. 1. This is, of course, not a big surprise because the hyperparameters match the true underlying system of Example 1 perfectly. The fitting performance deteriorates, however, if the hyperparameters are off. This can be seen in the bottom graph of Fig. 2, which shows the posterior GP with the same hyperparameters, but for the system of

Example 2

As Example 1, but with the parameters $a$ and $b$ perturbed by about 10%.

Fig. 2: GP fit using the parametric LQR kernel (hyperparameters $\hat{a}$, $\hat{b}$, and $\sigma_p$). The color code is the same as in Fig. 1. The hyperparameters are exact for Example 1, while they are off by about 10% for Example 2.

To improve the fit in this situation, one can employ hyperparameter optimization (see Sec. III-A) to find improved parameters $\hat{a}$ and $\hat{b}$ that better explain the data. The simulation results in Sec. V will show that this can be a viable approach. An alternative is to design more flexible and expressive kernels, which allow for fitting more general models. This we discuss next.

IV-D Nonparametric LQR kernel

The kernel (18) captures the structure of the cost function for the LQR problem with one specific linear model $(\hat{a}, \hat{b})$. A straightforward way to increase the flexibility of the kernel in order to fit more general problems is to use $m$ basis functions (or features) corresponding to models $(a_i, b_i)$,

$J(f) = \sum_{i=1}^{m} w_i\, \phi_i(f), \quad \phi_i(f) = \frac{q + r f^2}{1 - (a_i + b_i f)^2},$ (19)

with $w = (w_1, \dots, w_m)^\top \sim \mathcal{N}(0, \Sigma_p)$, $a_i \in [a_l, a_u]$, $b_i \in [b_l, b_u]$. The derivation of the corresponding kernel is analogous to (18) (see [13, Sec. 2.7]) and yields

$k(f, f') = \phi(f)^\top \Sigma_p\, \phi(f'), \quad \phi(f) := (\phi_1(f), \dots, \phi_m(f))^\top.$ (20)

Same as (16), the model (19) represents a parametric model for the LQR cost $J$. That is, its flexibility is essentially limited to the number $m$ of explicit features. Employing powerful kernel techniques [22], the parametric model can be turned into a nonparametric one, which includes an infinite number of features while retaining finite computational complexity. The key idea is to consider the kernel (20) in the limit of infinitely many features corresponding to all models $a \in [a_l, a_u]$ and $b \in [b_l, b_u]$. The derivation follows ideas similar to how the standard SE kernel can be derived from basic features [13, p. 84].

Consider the partitions of $[a_l, a_u]$ and $[b_l, b_u]$ into $N$ equidistant intervals each, and let $a_1, \dots, a_N$ and $b_1, \dots, b_N$ be the lower (or upper) interval limits. Consider the model (19) with feature vector

$\phi(f) = \big( \phi_{11}(f), \dots, \phi_{NN}(f) \big)^\top, \quad \phi_{ij}(f) := \frac{q + r f^2}{1 - (a_i + b_j f)^2},$

which includes all combinations $(a_i, b_j)$ for $i, j = 1, \dots, N$, and the parametric prior $\Sigma_p = \frac{\sigma^2}{N^2} I$ for some $\sigma^2 > 0$. The kernel (20) then becomes

$k(f, f') = \frac{\sigma^2}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \phi_{ij}(f)\, \phi_{ij}(f').$ (21)

Since $\phi_{ij}(f)\, \phi_{ij}(f')$ is continuous in $(a_i, b_j)$ on the uncertainty box for any $f, f' \in \mathcal{F}$ (the denominators are bounded away from zero by (14)), the finite sum (21) converges to a Riemann integral in the limit as $N \to \infty$. We can thus define the nonparametric LQR kernel

$k_{\mathrm{nLQR}}(f, f') := \sigma^2 \int_{a_l}^{a_u} \int_{b_l}^{b_u} \frac{(q + r f^2)(q + r f'^2)}{\big(1 - (a + b f)^2\big)\big(1 - (a + b f')^2\big)}\, \mathrm{d}b\, \mathrm{d}a$ (22)

for $f, f' \in \mathcal{F}$, with the signal variance $\sigma^2$ (into which the normalization by the area of the uncertainty box is absorbed) and the integration boundaries $a_l$, $a_u$, $b_l$, and $b_u$ as hyperparameters. While the kernel (22) represents the structure of the cost function (15) for an infinite number of models (all $a$ and $b$ in (13)), its computation is finite, consisting of solving the integral in (22). By contrast, the computational complexity of the parametric kernels (20) and (21) grows with the number of features. We next prove that the kernel (22) is indeed a valid covariance function.

Proposition 1

The function $k_{\mathrm{nLQR}}$ in (22) is a positive semidefinite kernel on $\mathcal{F}$.

Take any collection $f_1, \dots, f_M \in \mathcal{F}$ and any $z \in \mathbb{R}^M$. Let $G$ be the Gram matrix of $k_{\mathrm{nLQR}}$, $G_{ij} = k_{\mathrm{nLQR}}(f_i, f_j)$. Then

$z^\top G z = \sigma^2 \int_{a_l}^{a_u} \int_{b_l}^{b_u} \Big( \sum_{i=1}^{M} z_i\, \frac{q + r f_i^2}{1 - (a + b f_i)^2} \Big)^2 \mathrm{d}b\, \mathrm{d}a \;\geq\; 0,$

which completes the proof.

The above derivation corresponds to the kernel trick [13, 22], which is a core idea of kernel methods and behind many powerful learning algorithms. In essence, the kernel trick means writing a learning algorithm solely in terms of inner products of features and replacing those by a kernel². In particular, this allows for considering an infinite number of features, while retaining finite computation.

²All computations for the GP regression, i.e., equations (5), (6), and (7), are expressed solely in terms of the kernel $k$ (and the mean $\mu$).
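In general, the double integral in (22) can be evaluated numerically; a sketch using SciPy (names are ours, and Gram entries are computed entrywise for scalar inputs):

```python
from scipy.integrate import dblquad

def lqr_feature_ab(f, a, b, q=1.0, r=1.0):
    # LQR cost (15) seen as a feature of f indexed by the model (a, b).
    return (q + r * f**2) / (1.0 - (a + b * f)**2)

def k_nlqr(f1, f2, a_lims, b_lims, sig=1.0):
    # Nonparametric LQR kernel (22): integrate the feature product over the
    # uncertainty box; f1, f2 must lie in the stabilizing set F of (14).
    val, _ = dblquad(lambda b, a: lqr_feature_ab(f1, a, b) * lqr_feature_ab(f2, a, b),
                     a_lims[0], a_lims[1],                      # outer variable: a
                     lambda a: b_lims[0], lambda a: b_lims[1])  # inner variable: b
    return sig**2 * val
```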

Figure 3 shows the prior and posterior GP for the nonparametric LQR kernel (22). Because this kernel is more flexible than (18), it fits the cost functions of both Example 1 and Example 2 well.

Fig. 3: GP fit using the nonparametric LQR kernel (hyperparameters $\sigma$, $a_l$, $a_u$, $b_l$, and $b_u$). Colors are the same as in Fig. 1. Both examples are fitted well.

IV-E A combined kernel

System (12) is only an approximation to the true system (1). It is thus not desirable to fully commit to this model for solving the optimal control problem, which would mean directly minimizing (15) (and would result in the well-known LQR solution). On the other hand, the linear problem contains information about the structure of the optimization problem, which should be useful also for optimizing the true nonlinear system (1), as long as we believe (12) to be a reasonable approximation thereof. In other words, we can expect the true cost function of the nonlinear problem to bear some similarity to (15).

We model the cost function (3) for the nonlinear system (1) as being composed of a part that stems from the approximation as an LQR problem and an error term,

$J(f) = J_{\mathrm{LQR}}(f) + J_{\Delta}(f).$ (23)

The term $J_{\Delta}$ captures the error in the model that stems from the fact that the true problem is nonlinear. We model it as a standard GP, e.g., employing the SE kernel (11): $J_{\Delta} \sim \mathcal{GP}(0, k_{\mathrm{SE}})$. We can model $J_{\mathrm{LQR}}$ as a GP (17) using either the parametric (18) or the nonparametric (22) LQR kernel. Then, since the sum of two independent Gaussians is also Gaussian, it follows from (23) that $J$ is also a GP (see [13, Sec. 2.7]), with

$J \sim \mathcal{GP}(\mu,\ k_{\mathrm{LQR}} + k_{\mathrm{SE}}),$

where $k_{\mathrm{LQR}}$ can be replaced by (18) or (22). By choosing the hyperparameters of the kernels, the designer can express how much he or she trusts the LQR versus the SE model. For example, setting the signal variance of the SE kernel to zero means fully relying on the LQR kernel.
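The additive model (23) translates to simply summing the two kernels; a sketch reusing k_plqr and se_kernel from the earlier snippets (the hyperparameter values shown are placeholders, not the ones used in the experiments):

```python
def k_combined(f1, f2, a_hat, b_hat, sig_p=1.0, sig_se=0.2, ell=0.1):
    # Kernel of the combined GP for (23): LQR structure plus an SE error term.
    # sig_se = 0 recovers the pure LQR kernel; a large sig_se approaches plain SE.
    return k_plqr(f1, f2, a_hat, b_hat, sig_p) + se_kernel(f1, f2, sig_se, ell)
```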

V Simulations

In this section, we show statistical comparisons of the LQR kernels proposed in Sec. IV against the commonly used SE kernel, in two different settings. In the first setting, we evaluate the performance of each kernel in the context of GP regression. Specifically, we quantify the mismatch between the GP posterior mean, computed from a set of random evaluations, and the underlying cost function. In the second setting, we evaluate each kernel in the context of BO by comparing the learned minimum to the true global minimum.

The GP regression and BO experiments are presented in Sec. V-B and Sec. V-C, respectively, considering a linear system (1). In addition, we also evaluate the BO setting for a nonlinear system in Sec. V-D.

V-A Experimental choices

For the simulations in Sec. V-B and Sec. V-C, we consider the true system (1) to be linear, i.e., of the form (12), with uncertain parameters $a$ and $b$ as in (13). We consider the optimal control problem as in Sec. II with fixed weights $q$ and $r$. Feedback controllers (2) are considered to be in the stabilizing range $\mathcal{F}$ given by (14).

For each controller $f$, the corresponding infinite-horizon LQR cost is given by (15). In practice, only finite-horizon simulations can be realized. Therefore, the outcome of an experiment is noisy, as modeled in (4), with noise variance $\sigma_n^2$.

In each simulation, a different linear model $(a, b)$ is obtained by uniformly sampling the intervals (13), which yields a different underlying cost function (15). Each simulation is repeated for four different kernels:

  • SE kernel: as described in (11).

  • LQR kernel I: parametric LQR kernel (18) with fixed hyperparameters $\hat{a}$, $\hat{b}$ (midpoints of the uncertainty intervals).

  • LQR kernel II: parametric LQR kernel (18), with $\hat{a}$, $\hat{b}$ optimized from evaluations by maximizing (7).

  • LQR kernel III: nonparametric LQR kernel (22).

The parametric LQR kernel (18) is constructed taking the midpoints of the uncertainty intervals, i.e., $\hat{a} = (a_l + a_u)/2$ and $\hat{b} = (b_l + b_u)/2$. For the nonparametric LQR kernel (22), the intervals (13) serve as integration domains. The length scale of the SE kernel is computed as one fifth of the input domain $\mathcal{F}$. The signal variances of the parametric and nonparametric LQR kernels are normalized such that the prior variance equals one at the midpoint of $\mathcal{F}$. Since the variance of these two kernels grows fast toward the corners of the domain, the signal variance of the SE kernel is set to a comparable level, for a fair comparison.

V-B GP regression setting

For this statistical comparison, we run 1000 simulations. In each simulation, we compute the GP posterior conditioned on two evaluations randomly sampled from the underlying cost function. We assess the quality of the regression by computing the root mean squared error (RMSE) between the true cost function and the GP posterior mean, both evaluated on a grid of 100 points over $\mathcal{F}$.

Fig. 4: Histograms of the root mean squared error (RMSE) between the true cost function and the GP posterior mean conditioned on two evaluations. The histograms are computed from 1000 simulations.

Fig. 4 shows the histograms of the RMSE obtained with the different kernels. The LQR kernels clearly outperform the SE kernel in these experiments because they contain structural knowledge about the true cost, which contributes to a better GP fit, even with only two data points. The nonparametric kernel makes good predictions because it inherently contains information about all possible cost functions within the uncertainty ranges of $(a, b)$. However, poorly specified integration bounds will decrease its performance; the RMSE statistics confirm this when the integration intervals on $(a, b)$ are chosen 50% larger than the actual uncertainties. The parametric kernel with fixed hyperparameters ($\hat{a}$, $\hat{b}$) has a significant number of outliers since, in many cases, the data is queried from a cost function whose sampled parameters $(a, b)$ are far away from the ones of the kernel. The parametric kernel with hyperparameter optimization also leads to a better fit than SE, but has the most outliers, since hyperparameter optimization is not reliable with just two data points.

n    SE ker.       SE ker. (*)   LQR ker. I    LQR ker. II   LQR ker. III
1    2.76 (0.74)   2.39 (0.81)   1.01 (0.95)   1.98 (1.29)   1.31 (0.71)
2    2.49 (0.85)   2.42 (0.93)   1.02 (0.96)   1.09 (1.07)   1.22 (0.67)
5    1.83 (1.02)   1.31 (0.95)   1.10 (1.05)   0.45 (0.70)   1.20 (0.76)
10   1.13 (0.99)   0.98 (0.86)   0.98 (0.96)   0.20 (0.46)   1.11 (0.74)
TABLE I: RMSE averaged over 1000 simulations (standard deviation in parentheses); n is the number of evaluations.

We have repeated these experiments with 1, 2, 5, and 10 evaluations. Table I shows the RMSE averaged over 1000 simulations for each kernel, and the corresponding standard deviation (in parentheses). The outliers (i.e., any RMSE above 5) were excluded from these computations. In general, we see that the LQR kernel optimized from data performs better than the others for more than two evaluations.

For a fair comparison, we also include the SE kernel with hyperparameter optimization in the table (marked with an asterisk). Because it performs similarly to the SE kernel and does not improve upon the LQR kernels, we leave it out of the discussion for the rest of the paper.

Remark: The hyperparameters ($\hat{a}$, $\hat{b}$) of the parametric LQR kernel are optimized from data by maximizing the marginal likelihood (7). In a sense, optimizing these hyperparameters from data can be considered similar to doing system identification on the linear system (12) using (7) as the performance metric.

V-C BO setting

In this section, we evaluate the performance of each kernel in the context of BO. For each BO run, the first evaluation is decided randomly within the range of controllers $\mathcal{F}$. Subsequent evaluations are acquired using the expected improvement (EI) method, (8) and (9). We stop the exploration after three evaluations and compute the instantaneous regret (i.e., the absolute error between the true minimum and the minimum of the GP posterior mean) as the outcome of each experiment.

Fig. 5: Histogram of the regret incurred by stopping BO after three evaluations, for the linear system (12). The histogram is computed over 100 BO runs.

Fig. 5 shows the histogram of the regret for each kernel over 100 BO runs. The LQR kernels consistently outperform the SE kernel. The nonparametric kernel shows some outliers because, in some cases, the GP posterior mean grows large toward negative values, and so does its minimum. This is a wrong prediction of the underlying cost, which is positive by definition. However, this minor issue can easily be detected, and the optimization procedure continued with a randomly sampled controller.

V-D BO setting for a nonlinear system

In this section, we use the same BO setting as in Sec. V-C, but consider now a nonlinear system (1), namely

(24)

with additive noise $v_k$ and uncertain parameters $a$ and $b$. We control this system to zero, using the same controller structure as the one described in Sec. V-A. In this case, the considered range of controllers corresponds to (14), reduced by a fixed margin to account for the nonlinearity. The LQR kernels are built using the linearized version of (24) around the zero equilibrium point, i.e., a model of the form (12). Table II shows the regret average and standard deviation. As can be seen, the LQR kernels perform better than the SE kernel.

n   SE kernel     LQR kernel I   LQR kernel II   LQR kernel III
2   1.34 (0.33)   0.30 (0.21)    0.32 (0.20)     0.35 (0.18)
3   0.49 (0.39)   0.33 (0.38)    0.31 (0.19)     0.32 (0.18)
4   0.35 (0.27)   0.36 (0.44)    0.32 (0.18)     0.32 (0.17)
5   0.36 (0.21)   0.31 (0.36)    0.32 (0.20)     0.32 (0.18)
TABLE II: Regret averaged over 100 BO runs for a nonlinear system (standard deviation in parentheses); n is the number of evaluations.

VI Concluding Remarks

In this paper, we discussed how prior knowledge about the structure of an optimal control problem can be leveraged for data-efficient learning control. Specifically, for a nonlinear quadratic optimal control problem, we showed how an uncertain linear model approximating the true nonlinear dynamics can be exploited in a Bayesian setting. This led to the proposal of two novel kernels, a parametric and a nonparametric version of an LQR kernel, which incorporate the structure of the well-known LQR problem as prior knowledge. Numerical simulations herein demonstrate improved data efficiency over standard kernels, i.e., good controllers are learned from fewer experiments. We hope that the discussion and analysis herein also motivate further development of kernels tailored for other learning control problems.

Approaching the nonlinear quadratic optimal control problem presented herein with purely model-based methods can lead to superior performance when very accurate models are available, compared to the proposed data-based approach. However, this paper shares the motivation of [3], which proposes a data-based approach when only poor models are available, and extends it by incorporating the LQR structure into the kernel.

The results herein are preliminary in the sense that the LQR kernels are developed for a first-order system. While this is the natural first step, future work will concern the extension of the ideas and derivations to multivariate systems in order to develop this into a powerful framework for learning control in practice. Moreover, we plan to validate the benefit of the new kernels in experiments on more realistic nonlinear examples or physical hardware, such as those in [3] and [5]; these settings pose challenging issues, such as imperfect state measurements, among others.

References

  • [1] R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction.   MIT press, 1998.
  • [2] J. Kober, J. A. Bagnell, and J. Peters, “Reinforcement learning in robotics: A survey,” The International Journal of Robotics Research, vol. 32, no. 11, pp. 1238–1274, 2013.
  • [3] A. Marco, P. Hennig, J. Bohg, S. Schaal, and S. Trimpe, “Automatic LQR tuning based on Gaussian process global optimization,” in IEEE International Conference on Robotics and Automation, 2016, pp. 270–277.
  • [4] F. Berkenkamp, A. P. Schoellig, and A. Krause, “Safe controller optimization for quadrotors with Gaussian processes,” in IEEE International Conference on Robotics and Automation, 2016, pp. 491–496.
  • [5] A. Marco, F. Berkenkamp, P. Hennig, A. P. Schoellig, A. Krause, S. Schaal, and S. Trimpe, “Virtual vs. real: Trading off simulations and physical experiments in reinforcement learning with Bayesian optimization,” in IEEE International Conference on Robotics and Automation, 2017, pp. 1557–1563.
  • [6] J. Schreiter, D. Nguyen-Tuong, M. Eberts, B. Bischoff, H. Markert, and M. Toussaint, “Safe exploration for active learning with Gaussian processes,” in Machine Learning and Knowledge Discovery in Databases: European Conference, 2015, pp. 133–149.
  • [7] R. Calandra, A. Seyfarth, J. Peters, and M. P. Deisenroth, “Bayesian optimization for learning gaits under uncertainty,” Annals of Mathematics and Artificial Intelligence, pp. 1–19, 2015.
  • [8] R. Antonova, A. Rai, and C. G. Atkeson, “Sample efficient optimization for learning controllers for bipedal locomotion,” in IEEE International Conference on Humanoid Robots, 2016, pp. 22–28.
  • [9] J. R. Medina, S. Endo, and S. Hirche, “Impedance-based Gaussian processes for predicting human behavior during physical interaction,” in IEEE International Conference on Robotics and Automation, 2016, pp. 3055–3061.
  • [10] H. Abdelrahman, F. Berkenkamp, J. Poland, and A. Krause, “Bayesian optimization for maximum power point tracking in photovoltaic power plants,” in European Control Conference, 2016, pp. 2078–2083.
  • [11] B. D. O. Anderson and J. B. Moore, Optimal Control: Linear Quadratic Methods.   Mineola, New York: Dover Publications, 2007.
  • [12] S. Schaal and C. Atkeson, “Learning control in robotics,” IEEE Robotics Automation Magazine, vol. 17, no. 2, pp. 20–29, Jun. 2010.
  • [13] C. Rasmussen and C. Williams, Gaussian Processes for Machine Learning.   MIT Press, 2006.
  • [14] J. Mockus, Bayesian approach to global optimization: theory and applications, ser. Mathematics and its applications.   Kluwer Academic Publishers, 1989, vol. 37.
  • [15] D. R. Jones, M. Schonlau, and W. J. Welch, “Efficient global optimization of expensive black-box functions,” Journal of Global Optimization, vol. 13, no. 4, pp. 455–492, 1998.
  • [16] H. J. Kushner, “A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise,” Journal of Basic Engineering, vol. 86, no. 1, pp. 97–106, 1964.
  • [17] N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger, “Gaussian process optimization in the bandit setting: No regret and experimental design,” in International Conference on Machine Learning, 2010, pp. 1015–1022.
  • [18] P. Hennig and C. J. Schuler, “Entropy search for information-efficient global optimization,” The Journal of Machine Learning Research, vol. 13, no. 1, pp. 1809–1837, 2012.
  • [19] Y. Sui, A. Gotovos, J. W. Burdick, and A. Krause, “Safe exploration for optimization with Gaussian processes,” in International Conference on Machine Learning, 2015, pp. 997–1005.
  • [20] S. Trimpe, A. Millane, S. Doessegger, and R. D’Andrea, “A self-tuning LQR approach demonstrated on an inverted pendulum,” in IFAC World Congress, Cape Town, South Africa, Aug. 2014, pp. 11281–11287.
  • [21] B. D. O. Anderson and J. B. Moore, Optimal Filtering.   Mineola, New York: Dover Publications, 2005.
  • [22] B. Schölkopf and A. J. Smola, Learning with kernels.   MIT Press, 2002.