I Introduction
In reinforcement learning (RL) [29], the goal is to learn a controller that performs a desired task from the data produced by the interaction between the learning agent and its environment. In this framework, autonomous agents are trained to maximize their return. It is common to assume that such agents will be deployed in conditions that are similar, if not identical, to those they were trained in. In this case, a return-maximizing agent performs well at test time. However, in real-world applications, this assumption may be violated. For example, in robotics, we can use RL to learn to fly a drone indoors. Later on, however, we may use the same drone to carry a payload in a windy environment. The new environmental conditions and the possible deterioration of the drone components due to usage may result in poor, if not catastrophic, performance of the learned controller. Another scenario where training and testing conditions differ substantially is the sim-to-real setting, i.e., when we deploy a controller trained in simulation on a real-world agent.
Considering robustness alongside performance when learning a controller can limit the performance degradation caused by differing training and testing environments. In special cases, these goals are aligned, and a high-performing controller can also be robust. This is the case for the Linear Quadratic Regulator (LQR), a linear state-feedback controller that is optimal in the case of linear dynamics, quadratic cost, and perfect state measurements. It is well known that the LQR exhibits strong robustness indicators, such as gain and phase margins [2]. While performance and robustness go hand in hand for the LQR, they often conflict in other cases. For example, a celebrated result in control theory shows that the Linear Quadratic Gaussian (LQG) regulator, the noisy counterpart of the LQR, can be arbitrarily close to instability despite being optimal [12]. Thus, in general, we need to trade off performance against robustness [7].
Contributions. While many works investigate the performance/robustness trade-off in both the RL and control theory literature for the model-based setting, few results are known for the model-free scenario. However, there are several real-world scenarios where models are unavailable, inaccurate, or too expensive to use, but robustness is fundamental. Thus, in this paper, we introduce the first data-efficient, robust, model-free RL method based on policy optimization with multi-objective Bayesian optimization (MOBO). In particular, these are our contributions:

- We formulate robust, model-free RL as a multi-objective optimization problem.
- We propose a model-free, data-driven evaluation of delay and gain margins, two common robustness indicators from the model-based setting (where they are computed analytically).
- We solve this problem efficiently with expected hypervolume improvement (EHI).
- We introduce the first method that can learn robust controllers directly on hardware in a model-free fashion.
- We show that our approach outperforms non-robust policy optimization in evaluations on a Furuta pendulum, both in a sim-to-real and in a pure hardware setting.
Related work. Robustness has been widely investigated in control theory [32], and standard robust control techniques for linear systems include loop transfer recovery [26], H∞ control, and μ-synthesis [32, 15]. However, these methods typically assume the availability of a model, and none of them includes a learning component. Recently, robustness has drawn attention in data-driven settings, giving rise to the field of robust, model-based RL. Robust Markov decision processes study the RL problem when the transition model is subject to known and bounded uncertainties. For example, [22] studies the dynamic programming recursion in this setting. Other methods that consider parametric uncertainties include [28, 11]. All the previous methods are model-based. Robustness and performance are typical objectives in control design, which often conflict with each other, thus requiring design trade-offs [7, 5]. In the model-free literature, this trade-off is often fixed a priori and the resulting problem is solved with standard optimization methods. In [21], a weighted cost that balances performance and robustness is optimized. In [31], robust controllers are learned via gradient ascent with random multiplicative noise on the control action. In [23, 20], external, adversarial disturbances are used instead. In these works, the upper bound on the magnitude of the disturbance implicitly balances robustness and performance. However, setting this trade-off is often not intuitive and, if the requirements are misspecified or updated, a new controller must be learned. Alternatively, robust control design methods based on multi-objective optimization explore the spectrum of such trade-offs. The work in [14] gives a review of such methods, with a focus on genetic algorithms, which, due to their low data efficiency, require a model to compute the robustness indices.
Model-free RL algorithms are typically validated in simulation due to their high sample complexity. However, in robotics, it is crucial to test these methods on hardware. Bayesian optimization (BO) [19, 27] has been successfully applied to learn low-dimensional controllers for hardware systems. For example, [6] learns a linear controller for a quadrotor hovering task, [17] learns a linear state-feedback controller for a cart-pole system in a sim-to-real setting, and [8, 3] tune the parameters of ad-hoc controllers for locomotion tasks. However, none of these methods considers robustness, making ours the first one to learn robust controllers from data directly on hardware.
MOBO is the branch of BO that solves multi-objective problems. MOBO algorithms include EHI [13], PAL [34], and PESMO [16]. They have been applied to several tasks, including trading off prediction speed and accuracy in machine learning models. However, they have rarely been applied to RL. To the best of our knowledge, this has been done only in [30, 4], where a trade-off between frontal camera movement and forward speed is found for a snake-like robot, for homoscedastic and heteroscedastic noise, respectively. Robustness is not explicitly treated in these works.
II Problem Statement
In this section, we introduce our formulation of robust, model-free RL as a multi-objective optimization problem. For ease of exposition, we limit ourselves to two objectives. However, the approach naturally extends to any number of objectives, for example, multiple robustness indicators.
We assume we have a system with unknown dynamics f and unknown observation model g,

x_{t+1} = f(x_t, u_t, w_t),   y_t = g(x_t, v_t),   (1)

where x_t is the state, u_t is the control input, y_t is the observation, and w_t and v_t are the process and sensor noise. An RL agent aims at learning a controller π_θ, i.e., a mapping, parametrized by θ, from an observation to an action that allows it to complete its task. Policy optimization algorithms are a class of model-free RL methods that solve this problem by optimizing the performance of a given controller for the task at hand as a function of the parameters θ. Concretely, given a performance metric J, standard, non-robust policy optimization algorithms aim to find θ* = argmax_θ J(θ). In this work, we consider regulation tasks, i.e., bringing and keeping the system in a desired goal state. This includes common problems like stabilization, set-point tracking, and disturbance rejection. The performance indicator J encodes these objectives.
To extend this framework to the robustness-aware case, we use a second function R that measures the robustness of a controller. Since both the dynamics and the observation model are unknown, we must evaluate or approximate the value of R from data. In Sec. III-B, we introduce the gain and the delay margin, two alternatives for R that are commonly used in model-based control, and we discuss how to evaluate them in the model-free setting.
We aim at finding the best controller in terms of performance and robustness, as measured by J and R. However, since we compare controllers based on multiple, and possibly conflicting, criteria, we cannot define a single best controller. Given a controller θ, we denote with F(θ) = [J(θ), R(θ)] the array containing its performance and robustness values. To compare two controllers θ and θ', we use the canonical partial order over R^2: F(θ) >= F(θ') iff F_i(θ) >= F_i(θ') for i = 1, 2. This induces a relation in the controller space: θ >= θ' iff F(θ) >= F(θ'). If θ >= θ' and F_i(θ) > F_i(θ') for some i, we say that θ dominates θ'. The Pareto set P is the set of non-dominated points in the domain, i.e., θ belongs to P iff there is no θ' in the domain that dominates it. The Pareto front is the set of function values corresponding to the Pareto set. The Pareto set is optimal in the sense that, for each point in it, it is not possible to find another point in the domain that improves the value of one objective without degrading another [10]. The goal of this paper is to approximate the Pareto set and front from data.
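To make the dominance relation concrete, the following sketch computes the indices of the non-dominated points among a finite set of evaluated controllers (maximization in every objective). The function name and the simple quadratic-time pairwise check are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def pareto_set(values):
    """Indices of non-dominated rows of `values` for a maximization problem.

    values: (n, m) array; row i holds the m objective values (e.g., performance
    and robustness) of controller i. Row j dominates row i if it is >= in every
    objective and strictly > in at least one.
    """
    keep = []
    for i in range(values.shape[0]):
        dominated = any(
            np.all(values[j] >= values[i]) and np.any(values[j] > values[i])
            for j in range(values.shape[0]) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep
```

For instance, among the objective vectors (1, 2), (2, 1), and (2, 2), only the last is non-dominated; the first two are both dominated by it.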
Fig. 1 represents our problem graphically: we suggest a controller, evaluate its performance and robustness on the system, and select a new controller based on these observations to find an approximation of the Pareto front.
III Learning the Performance-Robustness Trade-off
For the robust, model-free RL setting we consider, we propose to learn the Pareto front characterizing the performance-robustness trade-off of a given system with MOBO. Here, we describe the components necessary to solve our problem in a data-efficient way: MOBO and the robustness and performance indicators used in our experiments. Moreover, we discuss how to evaluate such indicators from data in a model-free fashion.
III-A Multi-objective Bayesian optimization
MOBO algorithms solve multi-objective optimization problems by sequentially querying the objectives at different inputs and obtaining noisy evaluations of the corresponding values. They build a statistical model of the objectives to capture the belief over them given the available data. They measure how informative a point in the domain is about the problem solution with an acquisition function. At every iteration, they evaluate the objectives at the most informative point, as measured by the acquisition function. Thus, the complex multi-objective optimization problem is decomposed into a sequence of simpler scalar-valued optimization problems. In the following, we describe the surrogate model and the acquisition function used in this work.
Intrinsic Model of Coregionalization. A single-output Gaussian process (GP) [25] is a probability distribution over the space of functions f: R^d -> R such that the joint distribution of the function values computed over any finite subset of the domain follows a multivariate Gaussian distribution. A GP is fully specified by a mean function μ(x), which, w.l.o.g., is usually assumed to be zero for all x, and a covariance function, or kernel, k(x, x'). The kernel encodes the strength of the statistical correlation between two latent function values and, therefore, expresses our prior belief about the function behavior. Similarly, a T-output GP is a probability distribution over the space of functions of the form f: R^d -> R^T. The difference with respect to single-output GPs is that, in this case, the kernel must capture the correlation across different output dimensions in addition to the correlation of function values at different inputs. The simplest way of doing this is to assume that each output is independent. However, this model disregards the fundamental trade-off between robustness and performance that we are considering. For a review of kernels for multi-output GPs, see [1]. In this work, we use the intrinsic model of coregionalization (ICM), which defines the covariance between the value of output i at x and the value of output j at x' by separating the input and the output contributions as follows: cov(f_i(x), f_j(x')) = B_{ij} k(x, x'). In this case, we say f ~ GP(μ, B ⊗ k), where μ is a T-dimensional mean function, k is a scalar-valued kernel, and B is a T x T matrix describing the correlation in the output space (more details on B in Sec. IV). Given noisy observations of f, D = {(x_i, y_i)}_{i=1,...,n} with y_i = f(x_i) + e_i, where e_i is i.i.d. Gaussian noise, we can compute the posterior distribution of the function values conditioned on D at a target input x* in closed form. We denote with X the inputs contained in D and with K_X the n x n matrix with entries k(x_i, x_j) for i, j = 1, ..., n; then

μ(x* | D) = (B ⊗ k_{x*,X}) K_D^{-1} y,   (2)

Σ(x* | D) = B k(x*, x*) - (B ⊗ k_{x*,X}) K_D^{-1} (B ⊗ k_{x*,X})^T,   (3)

where K_D = B ⊗ K_X + Σ_noise, with ⊗ denoting the Kronecker product, k_{x*,X} is the row vector with entries k(x*, x_i) for i = 1, ..., n, and y is the nT-dimensional vector containing the concatenation of the observations in D.
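As a minimal numerical illustration of the posterior mean formula, the sketch below computes the ICM posterior mean for a two-output GP. We use a squared-exponential input kernel as a stand-in for the Matérn kernel used in our experiments, and all names and hyperparameter values here are illustrative assumptions.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """Squared-exponential input kernel k(x, x') (stand-in for a Matérn kernel)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def icm_posterior_mean(X, Y, Xstar, B, noise=1e-2, lengthscale=1.0):
    """Posterior mean of a 2-output ICM GP at test inputs Xstar.

    X: (n, d) training inputs; Y: (n, 2) performance/robustness observations.
    B: (2, 2) coregionalization matrix; the joint covariance is B kron k(X, X).
    Observations are stacked output-by-output to match the Kronecker ordering.
    """
    n, t = X.shape[0], Y.shape[1]
    K = np.kron(B, rbf(X, X, lengthscale)) + noise * np.eye(n * t)
    Ks = np.kron(B, rbf(Xstar, X, lengthscale))   # cross-covariance, (m*t, n*t)
    y = Y.T.reshape(-1)                           # [output-1 obs, output-2 obs]
    mean = Ks @ np.linalg.solve(K, y)
    return mean.reshape(t, -1).T                  # (m, 2) posterior means
```

With a small noise level, the posterior mean at a training input reproduces the corresponding observation, as expected from the interpolation property of GP regression.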
Expected Hypervolume Improvement. EHI is an acquisition function introduced in [13], which selects inputs to evaluate based on a notion of improvement with respect to the incumbent solution. In multi-objective optimization, incumbent solutions take the form of approximations of the Pareto set, whose quality is measured by the hypervolume indicator induced by the corresponding front with respect to a reference point r. Formally, the hypervolume indicator HV(A; r) of a set of points A with respect to a reference r is the Lebesgue measure of the hypervolume covered by the boxes that have an element of A as upper corner and the reference as lower corner. It quantifies the size of the portion of the output space that is Pareto-dominated by the points in A. Given an estimate of the Pareto front, P_f, the hypervolume improvement of an input x is defined as the improvement in hypervolume obtained by adding the function value at x to the front, HVI(x) = HV(P_f ∪ {f(x)}; r) - HV(P_f; r). However, we do not know f(x). Instead, we have a belief over its value expressed by the posterior distribution of the GP, which, in turn, induces a distribution over the hypervolume improvement corresponding to an input x. The EHI acquisition function quantifies the informativeness of an input toward the solution of the multi-objective optimization problem through the expectation of this distribution,

EHI(x) = E[HVI(x) | D].   (4)

[13] shows how to compute the integral in (4) in closed form.
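For the two-objective case considered here, the hypervolume indicator itself is easy to compute with a sort-and-sweep; the sketch below (maximization, with all points assumed to dominate the reference) is an illustrative helper, not the closed-form EHI computation of [13].

```python
def hypervolume_2d(front, ref):
    """Hypervolume dominated by a 2-D front (maximization) w.r.t. reference ref.

    front: list of (f1, f2) points, all componentwise >= ref.
    Sort by the first objective in decreasing order and sweep: each point adds
    a box whose height is its improvement in f2 over the best f2 seen so far.
    """
    pts = sorted(front, key=lambda p: p[0], reverse=True)
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 > best_f2:
            hv += (f1 - ref[0]) * (f2 - best_f2)
            best_f2 = f2
    return hv
```

Adding a candidate's objective vector to the front and recomputing this quantity gives the hypervolume improvement whose posterior expectation defines the EHI in (4).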
III-B Robustness
In general, robustness can have very different meanings. One may desire to ensure robustness to a certain class of disturbances, to imperfections in the control system, or to uncertainty in the process, for example. In control theory, the latter is often understood as robustness in the stricter sense. Specifically, robust stability ensures that a controller stabilizes every member of a set of uncertain processes [32]. Such processes can, for example, be defined through a nominal process and variations thereof. Different variations lead to different robustness characterizations. Likewise, there are different notions of stability that are meaningful depending on the context. For example, for a deterministic system, asymptotic stability, i.e., x_t -> x_e as t -> infinity, where x_e is an equilibrium of the system, is often used; for systems that are continuously excited, e.g., through noise, and thus cannot approach x_e, one may seek the above limit to hold in expectation, or practical stability in the sense of a bounded state, i.e., ||x_t - x_e|| <= c for all t. A controller is unstable when the respective condition does not hold (e.g., no asymptotic convergence, or the state grows beyond any bound).
While many sophisticated robustness metrics have been developed, stability margins such as gain and delay margins are among the most common and intuitive ones [5, Sec. 9.3]. We consider these in this work and comment on alternatives in Sec. V. Below, we formally introduce them and explain how to evaluate them in a model-free setting. Notice that our data-driven definitions can be extended to any setting where a success/failure outcome can be defined and, therefore, are not limited to stability considerations.
Gain margin. In classical control, the upper (lower) gain margin is defined for single-input-single-output (SISO) linear systems as the largest (smallest) factor that can multiply the open-loop transfer function such that the closed-loop system remains stable [33, Sec. 9.5]. As the open-loop transfer function encodes both the process and the controller dynamics, the factor may represent uncertainty in the process gain or in the actuator efficiency, for example. In this work, we consider a factor k multiplying the control action (i.e., u_t <- k u_t), which is equivalent to the definition for linear SISO systems, but can also be used for nonlinear ones. It quantifies how much we can attenuate or amplify the control action before making the system unstable. In a way, it quantifies how "far" we are from instability and, thus, how large a difference between training and testing we can tolerate.
Delay margin. Similarly, we define the delay margin as the largest time delay on the measurement such that the controlled system is still stable. Formally, it is the largest value of τ such that the closed-loop system with the control action delayed by τ is stable. As delays in the data transmission between sensor, controller, and actuator, and in the control computation, are present in most control systems, the delay margin is a very relevant measure.
Estimate from data. While the indicators above can be readily computed for linear systems, they are difficult to compute analytically if the model is nonlinear, and impossible if no model is available, as considered here. We describe an experiment to estimate the delay margin from data in a model-free setting (those for the gain margins are analogous). For general nonlinear systems, stability with respect to an equilibrium is a local property. Thus, we assume we can reset the system to a state in the neighborhood of the equilibrium of interest x_e, i.e., to a state inside a ball of given radius centered at x_e. We can establish whether the delay margin is larger or smaller than a given delay τ by resetting the system near x_e, deploying the delayed controller, and evaluating the stability of the resulting trajectory.
In practice, two problems arise with this approach: (i) we can evaluate only a finite number of delays with a finite number of experiments; and (ii) while stability is an asymptotic condition on the state, we do not know the state and we run finite experiments. The first problem requires us to select carefully the delays we evaluate. We know that increasing values of the delay take a stable system closer to instability. Thus, given a grid of candidate delays, we do a binary search to find the largest one for which the closed-loop system is stable. This allows us to approximate the delay margin with a number of experiments that is logarithmic in the grid size. Not knowing the state can be addressed by estimating it from the noisy sensor measurements or by introducing a new definition of stability based on the observations rather than the state. Concerning the finite trajectories, we note that, in practical cases, it is rare for small compounding deviations from the equilibrium to result in a divergent behavior emerging only in the long run. Often, a controller makes the system converge to or diverge from the equilibrium within a short amount of time. In our experiments, we say that a controller stabilizes the system if, after a burn-in time that accounts for the transient behavior, it keeps the state within a box around the equilibrium. Controllers with good margins are investigated further with longer experiments to eliminate potential outliers due to the finite-trajectory issue.
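The binary search over candidate delays can be sketched as follows; `is_stable` stands for the stability experiment described above (reset near the equilibrium, run the delayed controller, check that the state stays in a box after the burn-in) and is an assumed callback, not a real API.

```python
def estimate_delay_margin(delays, is_stable):
    """Binary search for the largest delay in the sorted list `delays`
    for which the closed loop is still judged stable.

    is_stable(d): runs one experiment with measurement delay d and returns
    True/False. Assumes stability is monotone in the delay: stable below the
    margin, unstable above it. Returns None if even the smallest delay fails.
    """
    lo, hi, best = 0, len(delays) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_stable(delays[mid]):
            best = delays[mid]
            lo = mid + 1   # margin is at least delays[mid]; try larger delays
        else:
            hi = mid - 1   # unstable; try smaller delays
    return best
```

With K candidate delays, this uses on the order of log2(K) experiments instead of K, which matters when every evaluation is a hardware run.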
Reliably estimating robustness indicators and the stability of a system without a model is challenging. The estimation technique we presented is intuitive and easy to implement. While it does not provide formal guarantees on the estimation error, we show in Sec. IV that it is accurate enough to greatly improve the robustness of our algorithm with respect to the non-robust policy optimization baseline.
III-C Algorithm
In this section, we describe our robust policy optimization algorithm; for the pseudocode, see Algorithm 1. At each iteration, we select the controller that maximizes the EHI criterion. Then, we run two experiments to estimate its performance and robustness. For the performance, we introduce a state- and action-dependent reward and define the return as the average reward obtained over an episode. The performance index is defined as the expectation of the return, which we approximate with a Monte Carlo estimate over multiple episodes. To estimate the robustness, we use the experiments from Sec. III-B. We update the data set with the results of these experiments. Finally, we update the estimate of the Pareto front used to compute the EHI as the set of non-dominated points of the data set. Other options to compute such an estimate from the posterior of the GP exist. However, they are computationally more expensive and resulted in similar performance in our experiments. In the end, the algorithm returns an estimate of the Pareto set and front. The choice of a controller from the Pareto set depends on the performance-robustness trade-off required by the test application and, therefore, is left to the practitioner.
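The loop structure of Algorithm 1 can be summarized by the following skeleton. The toy objective, the uniform random proposal (standing in for the maximization of EHI under the ICM-GP posterior), and all names are our own illustrative assumptions; only the iterate/evaluate/update-front structure mirrors the algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(theta):
    """Toy stand-in for the performance and robustness experiments:
    two noisy objectives of a 1-D 'controller' theta with conflicting optima."""
    performance = -(theta - 0.3) ** 2 + rng.normal(0.0, 0.01)
    robustness = -(theta - 0.7) ** 2 + rng.normal(0.0, 0.01)
    return np.array([performance, robustness])

def dominates(a, b):
    return bool(np.all(a >= b) and np.any(a > b))

data_X, data_Y = [], []
for it in range(30):
    theta = rng.uniform(0.0, 1.0)      # placeholder for argmax of EHI
    data_X.append(theta)
    data_Y.append(evaluate(theta))     # performance + margin experiments
    # Incumbent Pareto front: non-dominated observations collected so far.
    front = [y for y in data_Y if not any(dominates(z, y) for z in data_Y)]
```

After the loop, `front` is the data-based estimate of the Pareto front that the EHI computation would use at the next iteration.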
IV Experimental Results
We compare the robust policy optimization algorithm in Algorithm 1 to its non-robust counterpart based on scalar BO as, e.g., in [6, 18]. We use the scalar equivalent of EHI for the non-robust case, i.e., the expected improvement (EI) algorithm [19]. We present two sets of experiments: training controllers in simulation and directly on hardware, respectively. In both cases, the learned controllers are tested on the hardware under a set of different conditions.
System. We learn a controller for a Furuta pendulum [9] (see Fig. 1), a system closely related to the well-known cart-pole. It replaces the cart with a rotary arm that rotates in the horizontal plane. In our experiments, we use the Qube Servo 2 by Quanser [24], a small-scale Furuta pendulum. It uses a brushless DC motor to exert a torque on the rotary arm, and it is equipped with two incremental optical encoders with 2048 counts per revolution to measure the angles of the rotary arm and the pendulum. For sim-to-real, we use the dynamics model provided in the Qube Servo 2 manual [24], which is a nonlinear rigid-body model. A more detailed model is presented in [9].
Controller. We consider a state-feedback controller to stabilize the pendulum about the vertical equilibrium. The system has four states: the angular positions of the rotary arm and of the pendulum, with zero pendulum angle corresponding to the vertical position, and the corresponding angular velocities. We control the voltage applied to the motor. We use the encoder readings as estimates of the angular positions, and we apply a low-pass filter to the difference of consecutive angular positions to estimate the angular velocities. We aim to find a linear state-feedback controller, i.e., a control action that is a linear function of the estimated state, parametrized by a gain vector.
Scaling and reward. We define a state- and action-dependent reward as the negative of a quadratic cost on the state and the control action. The performance associated with a controller is the expected average reward it induces on the system, i.e., the average of the reward over a trajectory of fixed duration. To prevent one of the objectives from dominating the contribution to the hypervolume improvement in the EHI algorithm, we must normalize them. We control the range of the robustness indicators, see Sec. III-B, and, therefore, it is easy to rescale them to the [0, 1] range. We observe empirically that the unnormalized return lies within a fixed range. Thus, we clip every return value to this range and rescale it to the [0, 1] interval. Since the pendulum incurs substantially different returns when a stabilizing or a destabilizing controller is used, we cannot rescale the range linearly. Instead, we use a piecewise linear function. In particular, since we observe empirically that stabilizing controllers have a performance between -20 and 0, we rescale the sub-range below -20 and the sub-range [-20, 0] linearly onto the lower and upper parts of [0, 1], respectively. This differentiates coarsely the quality of unstable controllers and gives a more refined scale over stable ones.
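The piecewise-linear normalization can be implemented as below. The breakpoints (a clipping range starting at -100, a knee at -20, and a 0.5 output split) are illustrative assumptions chosen to match the description; the exact values in our experiments were set empirically.

```python
import numpy as np

def normalize_return(r, lo=-100.0, knee=-20.0, hi=0.0, knee_out=0.5):
    """Map a raw return r to [0, 1] with a piecewise linear function.

    lo, knee, and knee_out are assumed values for illustration: returns in
    [lo, knee] (mostly destabilizing controllers) are coarsely mapped to
    [0, knee_out]; returns in [knee, hi] (stabilizing controllers) get the
    finer scale [knee_out, 1]. Values outside [lo, hi] are clipped first.
    """
    r = float(np.clip(r, lo, hi))
    if r <= knee:
        # coarse scale over the low-return (unstable) range
        return (r - lo) / (knee - lo) * knee_out
    # finer scale over the stabilizing range
    return knee_out + (r - knee) / (hi - knee) * (1.0 - knee_out)
```

The knee gives unstable controllers a compressed scale while preserving resolution among the stabilizing ones, which is what the EHI objective actually needs to discriminate.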
Table I: Sim-to-real test results. For each scenario, we report the average return, the failure rate, and the mean failure time (s), averaged over the failing runs.

             Standard            Motor noise         Sensor noise        Added mass
             Ret.   Fail  t (s)  Ret.   Fail  t (s)  Ret.   Fail  t (s)  Ret.   Fail  t (s)
Scalar BO    0.150  80%   0.92   0.151  80%   0.97   0.151  80%   1.05   0.185  100%  1.03
MOBO-DM      0.044  32%   4.61   0.038  20%   4.07   0.063  20%   4.32   0.126  84%   3.21
MOBO-GM      0.003  0%    –      0.013  0%    –      0.057  0%    –      0.004  0%    –
Table II: Hardware test results. Scenarios (cf. Sec. IV): (i) extra mass, (ii) a different extra mass, (iii) extra mass and extra pendulum length, (iv) extra mass with motor and sensor noise.

             (i)                 (ii)                (iii)               (iv)
             Ret.   Fail  t (s)  Ret.   Fail  t (s)  Ret.   Fail  t (s)  Ret.   Fail  t (s)
Scalar BO    0.101  0%    –      0.669  100%  4.07   0.711  100%  3.59   0.259  0%    –
MOBO-GM      0.031  0%    –      0.026  0%    –      0.0259 0%    –      0.366  0%    –
Surrogate models. For the non-robust algorithm, we use a standard GP model with a zero-mean prior and a Matérn kernel with automatic relevance determination (ARD). We place hyperpriors over the lengthscales and over the standard deviation. We use a Gaussian likelihood with no hyperprior. Similarly, for the robust algorithm, we use a zero prior mean. The correlation in the input space in the ICM model is captured by an ARD Matérn kernel with the same hyperpriors as in the non-robust case. For the correlation in the output space, we set a Gaussian hyperprior over each entry of the coregionalization matrix. We use a Gaussian likelihood with a diagonal covariance matrix. In both cases, we update the hyperparameters with a maximum a posteriori estimate after every new data point is acquired.
Training. In the sim-to-real setting, we train 5 different controllers for each of these methods: scalar BO (non-robust), MOBO with performance and delay margin (MOBO-DM), and MOBO with performance and lower gain margin (MOBO-GM). The training consists of 200 BO iterations evaluated in simulation. In the hardware training setting, we train one controller for scalar BO and one for MOBO-GM using 70 BO iterations evaluated on hardware. In both settings, MOBO requires fewer iterations than the given budget to find satisfactory solutions. Thus, using a stopping criterion in Algorithm 1 would reduce the total number of iterations. We estimate performance by averaging the return over 10 independent runs. To estimate robustness, we require that the controller stabilizes the system for a given delay or gain over 5 independent runs. A trial is deemed stable if both angles remain within fixed bounds for the whole trial. Every training run lasts for 5 seconds. Figs. 3 and 2 show the fronts obtained by the MOBO-DM and MOBO-GM sim-to-real training, respectively. The gray circles correspond to controllers that appeared stabilizing at first, but were ruled out with longer simulations, cf. Sec. III-B. The green squares indicate controllers tested on hardware. To emphasize the generality of our method, they were selected to be approximately at the elbow of the front without further tuning.
Sim-to-real test. We test each controller learned in simulation on the hardware 5 times in 4 scenarios: (i) standard sim-to-real; (ii) sim-to-real with Gaussian noise added to the motor voltage; (iii) sim-to-real with integer-valued noise added to the encoder readings, drawn from a multinomial distribution over a small set of integers with probability 0.05 on each value other than zero; and (iv) sim-to-real with an increased pendulum mass. A run is a failure if the pendulum leaves the stability region defined above. In Table I, we compare the controllers in terms of average return, failure rate, and failing time, averaged over the runs that resulted in a failure. The robust methods consistently outperform the non-robust policy optimization across all test scenarios. The lower gain margin appears to be the more suitable robustness indicator in this setting. This may be because, in our experience, the gain margin is less noisy to estimate.
Hardware test. We test each controller learned on hardware 5 times in 4 scenarios: (i) and (ii) two different amounts of extra mass; (iii) extra mass and extra pendulum length; and (iv) extra mass with the actuation and sensor noise used in the sim-to-real experiments. Table II summarizes the test results. As in the sim-to-real setting, the robust algorithm consistently outperforms its non-robust counterpart.
V Concluding Remarks
We present a data-efficient algorithm for robust policy optimization based on multi-objective Bayesian optimization. We suggest a data-driven evaluation of two common robustness indicators, which is suitable for model-free settings. Our hardware experiments on a Furuta pendulum show that our method (i) facilitates simulation-to-real transfer and (ii) consistently increases the robustness of the learned controllers as compared to BO with a single performance objective. Our results indicate a promising avenue toward robust learning control by leveraging robustness measures from control theory and multi-objective Bayesian optimization, and point to several directions for extensions. While we show that gain and delay margins are effective in practice on a mildly nonlinear system, they may not fully characterize robust stability in general [33, 7]. Thus, investigating other relevant robustness indicators that can be efficiently estimated from data in a model-free setting is a topic for future research. Using multiple robustness indicators simultaneously is also relevant; our method supports this at the expense of a more complex scaling to balance robustness and performance.
References
 [1] (2012) Kernels for vector-valued functions: a review. Foundations and Trends in Machine Learning 4 (3), pp. 195–266. Cited by: §III-A.
 [2] (2007) Optimal control: linear quadratic methods. Courier Corporation. Cited by: §I.
 [3] (2017) Deep kernels for optimizing locomotion controllers. In Proceedings of the 1st Annual Conference on Robot Learning, Proceedings of Machine Learning Research, Vol. 78, pp. 47–56. Cited by: §I.

 [4] (2014) Expensive multiobjective optimization for robotics with consideration of heteroscedastic noise. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2230–2235. Cited by: §I.
 [5] (2008) Feedback systems: an introduction for scientists and engineers. Princeton University Press, Princeton, NJ, USA. Cited by: §I, §III-B.
 [6] (201605) Safe controller optimization for quadrotors with gaussian processes. In IEEE International Conference on Robotics and Automation (ICRA), Vol. , pp. 491–496. Cited by: §I, §IV.
 [7] (200706) The fundamental tradeoff between performance and robustness. IEEE Control Systems Magazine 27 (3), pp. 30–44. Cited by: §I, §I, §V.

 [8] (2016) Bayesian optimization for learning gaits under uncertainty. Annals of Mathematics and Artificial Intelligence 76 (1), pp. 5–23. Cited by: §I.
 [9] (2011) On the dynamics of the furuta pendulum. Journal of Control Science and Engineering 2011, pp. 3. Cited by: §IV.
 [10] (2013) Multiobjective optimization: Principles and case studies. Springer Science & Business Media. Cited by: §II.
 [11] (2010) Percentile optimization for markov decision processes with parameter uncertainty. Operations research 58 (1), pp. 203–213. Cited by: §I.
 [12] (1978) Guaranteed margins for LQG regulators. IEEE Transactions on Automatic Control 23 (4), pp. 756–757. Cited by: §I.
 [13] (2008) The computation of the expected improvement in dominated hypervolume of pareto front approximations. Rapport technique, Leiden University 34. Cited by: §I, §III-A.
 [14] (2011) Multiobjective optimal control: an introduction. In Control Conference (ASCC), 2011 8th Asian, pp. 1084–1089. Cited by: §I.
 [15] (2012) Linear robust control. Courier Corporation. Cited by: §I.
 [16] (2016) Predictive entropy search for multiobjective bayesian optimization. In International Conference on Machine Learning, pp. 1492–1501. Cited by: §I.
 [17] (2017) Virtual vs. real: trading off simulations and physical experiments in reinforcement learning with bayesian optimization. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 1557–1563. Cited by: §I.
 [18] (201605) Automatic LQR tuning based on Gaussian process global optimization. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 270–277. Cited by: §IV.
 [19] (1978) The application of bayesian methods for seeking the extremum. Towards global optimization 2 (117129), pp. 2. Cited by: §I, §IV.
 [20] (2001) Robust reinforcement learning. In Advances in Neural Information Processing Systems, pp. 1061–1067. Cited by: §I.
 [21] (2019) Dataefficient autotuning with bayesian optimization: an industrial control study. IEEE Transactions on Control Systems Technology. Cited by: §I.
 [22] (2005) Robust control of markov decision processes with uncertain transition matrices. Operations Research 53 (5), pp. 780–798. Cited by: §I.
 [23] (2017) Robust adversarial reinforcement learning. In International Conference on Machine Learning, pp. 2817–2826. Cited by: §I.
 [24] (2016) Qube servo 2  student workbook. Quanser Consulting Inc. (English). Cited by: §IV.
 [25] (2004) Gaussian processes in machine learning. In Advanced lectures on machine learning, pp. 63–71. Cited by: §III-A.
 [26] (1993) Loop transfer recovery: analysis and design. SpringerVerlag. Cited by: §I.
 [27] (2016) Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE 104 (1), pp. 148–175. Cited by: §I.
 [28] (2007) A robust markov game controller for nonlinear systems. Applied Soft Computing 7 (3), pp. 818–827. Cited by: §I.
 [29] (2018) Reinforcement learning: an introduction. MIT press. Cited by: §I.
 [30] (2013) Expensive multiobjective optimization for robotics. 2013 IEEE International Conference on Robotics and Automation, pp. 973–980. Cited by: §I.
 [31] (2019) Recovering robustness in modelfree reinforcement learning. In 2019 American Control Conference (ACC), pp. 4210–4216. Cited by: §I.
 [32] (1998) Essentials of robust control. Prentice Hall. Cited by: §I, §III-B.
 [33] (1996) Robust and optimal control. Vol. 40, Prentice Hall, New Jersey. Cited by: §III-B, §V.

 [34] (2016) ε-PAL: an active learning approach to the multi-objective optimization problem. Journal of Machine Learning Research 17 (104), pp. 1–32. Cited by: §I.