In reinforcement learning (RL), the goal is to learn a controller that performs a desired task from the data produced by the interaction between the learning agent and its environment. In this framework, autonomous agents are trained to maximize their return. It is common to assume that such agents will be deployed in conditions that are similar, if not identical, to those they were trained in. In this case, a return-maximizing agent performs well at test time. However, in real-world applications, this assumption may be violated. For example, in robotics, we can use RL to learn to fly a drone indoors. Later on, however, we may use the same drone to carry a payload in a windy environment. The new environmental conditions and the possible deterioration of the drone components due to usage may result in poor, if not catastrophic, performance of the learned controller. Another scenario where training and testing conditions differ substantially is the sim-to-real setting, i.e., when we deploy a controller trained in simulation on a real-world system.
Considering robustness alongside performance when learning a controller can limit performance degradation due to different training and testing environments. In special cases, these goals may be aligned, and a high-performing controller can also be robust. This is the case for the Linear Quadratic Regulator (LQR), a linear state-feedback controller that is optimal for the case of linear dynamics, quadratic cost, and perfect state measurements. It is well known that the LQR exhibits strong robustness indicators, such as gain and phase margins. While performance and robustness go hand in hand for the LQR, they are often conflicting in other cases. For example, a celebrated result in control theory shows that the Linear Quadratic Gaussian (LQG) regulator, the noisy counterpart of the LQR, can be arbitrarily close to instability, despite being optimal. Thus, in general, we need to trade off between performance and robustness.
Contributions. While many works investigating the performance/robustness trade-off exist in both the RL and control theory literature for the model-based setting, few results are known for the model-free scenario. However, there are several real-world scenarios where models are not available, inaccurate, or too expensive to use, but robustness is fundamental. Thus, in this paper, we introduce the first data-efficient, robust, model-free RL method based on policy optimization with multi-objective Bayesian optimization (MOBO). In particular, these are our contributions:
We formulate robust, model-free RL as a multi-objective optimization problem.
We propose a model-free, data-driven evaluation of delay and gain margins, two common robustness indicators from the model-based setting (where they are computed analytically).
We solve this problem efficiently with expected hypervolume improvement (EHI).
We introduce the first method that can learn robust controllers directly on hardware in a model-free fashion.
We show how our approach outperforms non-robust policy optimization in evaluations on a Furuta pendulum for both a sim-to-real and a pure hardware setting.
Related work. Robustness has been widely investigated in control theory, and standard robust control techniques for linear systems include loop transfer recovery, H∞ control, and μ synthesis [32, 15]. However, these methods typically assume the availability of a model, and none of them includes a learning component. Recently, robustness has drawn attention in data-driven settings, giving rise to the field of robust, model-based RL. Robust Markov decision processes study the RL problem when the transition model is subject to known and bounded uncertainties; for example, the dynamic programming recursion has been studied in this setting. Other methods that consider parametric uncertainties include [28, 11]. All the previous methods are model-based.
Robustness and performance are typical objectives in control design, which often conflict with each other, thus requiring design trade-offs [7, 5]. In the model-free literature, this trade-off is often fixed a priori and the resulting problem is solved with standard optimization methods. One line of work optimizes a weighted cost that balances performance and robustness. Another learns robust controllers via gradient ascent with random multiplicative noise on the control action. In [23, 20], external, adversarial disturbances are used instead. In these works, the upper bound on the magnitude of the disturbance implicitly balances robustness and performance. However, setting this trade-off is often not intuitive and, in case the requirements are misspecified or updated, a new controller must be learned. Alternatively, robust control design methods based on multi-objective optimization explore the spectrum of such trade-offs. A review of such methods, with a focus on genetic algorithms, is available in the literature; due to their low data efficiency, these algorithms require a model to compute the robustness indices.
Model-free RL algorithms are typically validated in simulation due to their high sample complexity. However, in robotics, it is crucial to test these methods on hardware. Bayesian optimization (BO) [19, 27] has been successfully applied to learn low-dimensional controllers for hardware systems. For example, it has been used to learn a linear controller for a quadrotor hovering task and a linear state feedback controller for a cart-pole system in a sim-to-real setting, while [8, 3] tune the parameters of ad-hoc controllers for locomotion tasks. However, none of these methods considers robustness, making ours the first one to learn robust controllers from data directly on hardware.
MOBO methods have been applied to several tasks, including trading off prediction speed and accuracy in machine learning models. However, they have rarely been applied to RL. To the best of our knowledge, this has been done only in [30, 4], where a trade-off between frontal camera movement and forward speed is found for a snake-like robot, under homoscedastic and heteroscedastic noise respectively. Robustness is not explicitly treated in these works.
II Problem Statement
In this section, we introduce our formulation of robust model-free RL as a multi-objective optimization problem. For ease of exposition, we limit ourselves to two objectives. However, this approach naturally extends to any number of objectives, for example, multiple robustness indicators.
We assume we have a system with unknown dynamics, f, and unknown observation model, g, where x is the state, u is the control input, y is the observation, and w and v are the process and sensor noise. An RL agent aims at learning a controller π_θ, i.e., a mapping, parametrized by θ, from an observation to an action, that allows it to complete its task. Policy optimization algorithms are a class of model-free RL methods that solve this problem by optimizing the performance of a given controller for the task at hand as a function of the parameters θ. Concretely, given a performance metric J, standard, non-robust policy optimization algorithms aim to find θ* = argmax_θ J(θ). In this work, we consider regulation tasks, i.e., bringing and keeping the system in a desired goal state. This includes common problems like stabilization, set-point tracking, or disturbance rejection. The performance indicator J encodes these objectives.
To extend this framework to the robustness-aware case, we use a second function, R, that measures the robustness of a controller. Since both the dynamics and the observation model are unknown, we must evaluate or approximate the value of R from data. In Sec. III-B, we introduce the gain and the delay margin, two alternatives for R that are commonly used in model-based control, and we discuss how to evaluate them in the model-free setting.
We aim at finding the best controller in terms of performance and robustness, as measured by J and R. However, since we compare controllers based on multiple, and possibly conflicting, criteria, we cannot define a single best controller. Given a controller with parameters θ, we denote with F(θ) = [J(θ), R(θ)] the array containing its performance and robustness values. To compare two controllers θ and θ', we use the canonical partial order over R^2: F(θ) ⪰ F(θ') iff F_i(θ) ≥ F_i(θ') for i = 1, 2. This induces a relation in the controller space: θ ⪰ θ' iff F(θ) ⪰ F(θ'). If θ ⪰ θ', we say that θ dominates θ'. The Pareto set P is the set of non-dominated points in the domain, i.e., θ ∈ P iff there is no θ' that dominates θ with a strictly better value in at least one objective. The Pareto front is the set of function values corresponding to the Pareto set. The Pareto set is optimal in the sense that, for each point in it, it is not possible to find another point in the domain that improves the value of one objective without degrading another. The goal of this paper is to approximate P from data.
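The dominance relation and the non-dominated set described above can be made concrete with a short sketch. The following function, a hypothetical helper assuming both objectives are to be maximized, filters a set of observed objective vectors down to its non-dominated subset:

```python
import numpy as np

def pareto_front(values):
    """Return the non-dominated subset of a set of objective vectors.

    values: (n, m) array, one row per controller, larger is better in
    every objective.  A row is dominated if some other row is >= in
    all objectives and > in at least one.
    """
    values = np.asarray(values, dtype=float)
    n = values.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(values[j] >= values[i]) \
                    and np.any(values[j] > values[i]):
                keep[i] = False
                break
    return values[keep]
```

This brute-force O(n²) filter is sufficient for the small data sets typical of BO; specialized algorithms exist for larger sets.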
Fig. 1 represents our problem graphically: we suggest a controller, we evaluate its performance and robustness on the system, and we select a new controller based on these observations to find an approximation of the Pareto front.
III Learning the Performance-Robustness Trade-off
For the robust, model-free RL setting we consider, we propose to learn the Pareto front characterizing the performance-robustness trade-off of a given system with MOBO. Here, we describe the necessary components to solve our problem in a data-efficient way: MOBO and the robustness and performance indicators used in our experiments. Moreover, we discuss how to evaluate such indicators from data in a model-free fashion.
III-A Multi-objective Bayesian optimization
MOBO algorithms solve multi-objective optimization problems by sequentially querying the objective at different inputs and obtaining noisy evaluations of the corresponding values. They build a statistical model of the objectives to capture the belief over them given the data available. They measure how informative a point in the domain is about the problem solution with an acquisition function. At every iteration, they evaluate the objective at the most informative point, as measured by the acquisition function. Thus, the complex multi-objective optimization problem is decomposed into a sequence of simpler scalar-valued optimization problems. In the following, we describe the surrogate model and the acquisition function used in this work.
Intrinsic Model of Coregionalization. A single-output Gaussian process (GP) is a probability distribution over the space of functions mapping the domain to the real numbers. It is fully specified by a mean function, which, w.l.o.g., is usually assumed to be zero for all inputs, and a covariance function, or kernel, k(θ, θ'). The kernel encodes the strength of statistical correlation between two latent function values and, therefore, it expresses our prior belief about the function behavior.
Similarly, a q-output GP is a probability distribution over the space of functions mapping the domain to R^q. The difference with respect to single-output GPs is that, in this case, the kernel must capture the correlation across different output dimensions in addition to the correlation of function values at different inputs. The simplest way of doing this is by assuming that each output is independent. However, this model disregards the fundamental trade-off between robustness and performance that we are considering. For a review of kernels for multi-output GPs, we refer the reader to the literature. In this work, we use the intrinsic model of coregionalization (ICM), which defines the covariance between the value of output i at θ and the value of output j at θ' by separating the input and the output contributions as follows: cov(F_i(θ), F_j(θ')) = k(θ, θ') B_ij, where k is a scalar-valued kernel and B is a q × q matrix describing the correlation in the output space (more details on B in Sec. IV). Given n noisy observations D = {(θ_i, y_i)}, with y_i = F(θ_i) + ε_i, where ε_i is i.i.d. Gaussian noise with variance σ², we can compute the posterior distribution of the function values at a target input θ* in closed form. Denoting with K the matrix with entries k(θ_i, θ_j) over the inputs in D, with k_* the n-dimensional vector with entries k(θ*, θ_i), and with y the nq-dimensional vector containing the concatenation of the observations in D, the posterior mean at θ* is (B ⊗ k_*ᵀ)(B ⊗ K + σ² I)⁻¹ y, where ⊗ denotes the Kronecker product.
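As an illustration of the ICM construction, the full covariance of the stacked outputs can be assembled as a Kronecker product of the output matrix B and the input Gram matrix. The sketch below uses a squared-exponential kernel as a stand-in for the Matérn kernel used later in the paper; all names are illustrative:

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel on the input space
    (a stand-in for the Matern kernel used in the paper)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def icm_covariance(X, B, lengthscale=1.0):
    """Full covariance of a q-output GP under the ICM:
    cov(F_i(x), F_j(x')) = B[i, j] * k(x, x'),
    stacked output-major as the Kronecker product B ⊗ K."""
    K = rbf(X, X, lengthscale)
    return np.kron(B, K)
```

For the model to be valid, B must be positive semi-definite, which is why it is often parametrized as B = A Aᵀ.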
Expected Hypervolume Improvement. EHI is an acquisition function that selects inputs to evaluate based on a notion of improvement with respect to the incumbent solution. In multi-objective optimization, incumbent solutions take the form of approximations of the Pareto set, whose quality is measured by the hypervolume indicator induced by the corresponding front with respect to a reference point r. Formally, the hypervolume indicator of a set of points A with respect to the reference r, HV(A, r), is the Lebesgue measure of the hypervolume covered by the boxes that have an element of A as upper corner and the reference r as lower corner. It quantifies the size of the portion of the output space that is Pareto-dominated by the points in A. Given an estimate of the Pareto front, the hypervolume improvement of an input θ is defined as the increase in hypervolume obtained by adding the function value F(θ) to the front. However, we do not know F(θ). Instead, we have a belief over its value expressed by the posterior distribution of the GP, which, in turn, induces a distribution over the hypervolume improvement corresponding to an input θ. The EHI acquisition function quantifies the informativeness of an input toward the solution of the multi-objective optimization problem through the expectation of this distribution.
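For two objectives, the hypervolume indicator and the hypervolume improvement can be computed exactly with a simple sweep. The following is a minimal sketch for the maximization case, with the reference point as the lower-left corner; a full EHI implementation would additionally take the expectation of the improvement under the GP posterior, which we omit here:

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume dominated by a set of 2-D points (maximization)
    with respect to the reference point `ref` (lower-left corner)."""
    pts = np.asarray([p for p in front if p[0] > ref[0] and p[1] > ref[1]])
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(-pts[:, 0])]  # sweep in decreasing first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:                 # dominated points add nothing
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

def hv_improvement(front, new_point, ref):
    """Increase in dominated hypervolume if `new_point` is added."""
    return hypervolume_2d(list(front) + [new_point], ref) \
        - hypervolume_2d(front, ref)
```

In EHI, this improvement is averaged over samples of F(θ) from the GP posterior instead of being evaluated at a known point.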
III-B Robustness Indicators

In general, robustness can have very different meanings. One may desire to ensure robustness to a certain class of disturbances, imperfections in the control system, or uncertainty in the process, for example. In control theory, the latter is often understood as robustness in the stricter sense. Specifically, robust stability ensures that a controller stabilizes every member of a set of uncertain processes. Such processes can, for example, be defined through a nominal process and variations thereof. Different variations lead to different robustness characterizations. Likewise, there are different notions of stability that are meaningful depending on the context. For example, for a deterministic system, asymptotic stability, i.e., convergence of the state to an equilibrium as time goes to infinity, is often used; for systems that are continuously excited, e.g., through noise, and thus cannot approach the equilibrium, one may seek the above limit to hold in expectation, or practical stability in the sense of a bounded state, i.e., the state remains within a bounded set for all times. A controller is unstable when the respective condition does not hold (e.g., no asymptotic convergence, or the state grows beyond any bound).
While many sophisticated robustness metrics have been developed, stability margins such as gain and delay margins are some of the most common and intuitive ones [5, Sec. 9.3]. We consider these in this work and comment on alternatives in Sec. V. Below, we formally introduce them and we explain how to evaluate them in a model-free setting. Notice that our data-driven definitions can be extended to any setting where a success/failure outcome can be defined and, therefore, are not limited to stability considerations.
Gain margin. In classical control, the upper (lower) gain margin is defined for single-input-single-output (SISO) linear systems as the largest factor greater than one (the smallest factor smaller than one) that can multiply the open-loop transfer function such that the closed-loop system remains stable [33, Sec. 9.5]. As the open-loop transfer function encodes both the process and the controller dynamics, the factor may represent uncertainty in the process gain or the actuator efficiency, for example. In this work, we consider a factor that multiplies the control action, which is equivalent to the definition for linear SISO systems, but can also be used for nonlinear ones. It quantifies how much we can attenuate or amplify the control action before making the system unstable. In a way, it quantifies how "far" we are from instability and, thus, how much difference between training and testing conditions we can tolerate.
Delay margin. Similarly, we define the delay margin as the largest time delay on the measurement such that the controlled system is still stable. Formally, it is the largest delay for which the closed-loop system with the correspondingly delayed control action is stable. As delays in data transmission between sensor, controller, and actuator, and in the control computation, are present in most control systems, the delay margin is a very relevant measure.
Estimate from data. While the indicators above can be readily computed for linear systems, they are difficult to compute analytically if the model is nonlinear, or impossible if no model is available, as considered herein. We describe an experiment to estimate the delay margin from data in a model-free setting (the experiments for the gain margins are analogous). For general nonlinear systems, stability with respect to an equilibrium is a local property. Thus, we assume we can reset the system to a state in the neighborhood of the equilibrium of interest, i.e., to a state within a ball of fixed radius centered at the equilibrium. We can establish whether the delay margin is larger or smaller than a given delay by resetting the system near the equilibrium, deploying the delayed controller, and evaluating the stability of the resulting trajectory.
In practice, two problems arise with this approach: (i) we can evaluate only a finite number of delays with a finite number of experiments; and (ii) while stability is an asymptotic condition on the state, we do not know the state and we run finite experiments. The first problem requires us to select carefully the delays we evaluate. We know that increasing values of delay take a stable system closer to instability. Thus, given a sorted grid of candidate delays, we do a binary search to find the largest one for which the closed-loop system is stable. This allows us to approximate the delay margin with a number of experiments that is logarithmic in the grid size. Not knowing the state can be addressed by estimating it from the noisy sensor measurements or by introducing a new definition of stability based on the observations rather than the state. Concerning the finite trajectories, we note that, in practical cases, it is rare for small compounding deviations from the equilibrium to result in divergent behavior emerging only in the long run. Often, a controller makes the system converge to or diverge from the equilibrium within a short amount of time. In our experiments, we say that a controller stabilizes the system if, after a burn-in time that accounts for the transient behavior, it keeps the state within a box around the equilibrium. Controllers with good margins are investigated further with longer experiments to eliminate potential outliers due to the finite-trajectory issue.
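The binary search over candidate delays can be sketched as follows. Here is_stable is a stand-in for the experiment described above (reset near the equilibrium, deploy the delayed controller, check the box condition after the burn-in time), and we assume stability is monotone in the delay:

```python
def estimate_delay_margin(is_stable, delays):
    """Approximate the delay margin on a sorted grid of candidate delays.

    is_stable(d): runs one (or a few) episodes with the measurement
    delayed by d and returns True/False -- a stand-in for the
    hardware/simulation experiment.  Assuming stability is monotone in
    the delay, a binary search finds the largest stable grid value in
    O(log n) experiments.  Returns None if even the smallest delay
    destabilizes the system.
    """
    if not is_stable(delays[0]):
        return None
    best = delays[0]
    lo, hi = 0, len(delays) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_stable(delays[mid]):
            best = delays[mid]
            lo = mid + 1
        else:
            hi = mid - 1
    return best
```

The returned value underestimates the true margin by at most one grid step, which can be made small without a model by refining the grid.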
Reliably estimating robustness indicators and stability of a system without a model is challenging. The estimation technique we presented is intuitive and easy to implement. While it does not provide formal guarantees on the estimation error, we show in Sec. IV that it is accurate enough to greatly improve the robustness of our algorithm with respect to the non-robust policy optimization baseline.
III-C Robust Policy Optimization

In this section, we describe our robust policy optimization algorithm; for the pseudocode, see Algorithm 1. At each iteration, we select the controller that maximizes the EHI criterion. Then, we run two experiments to estimate its performance and robustness. For the performance, we introduce a state- and action-dependent reward and we define the return as the average reward obtained over an episode. The performance index is defined as the expectation of the return, which we approximate with a Monte Carlo estimate over multiple episodes. To estimate the robustness, we use the experiments from Sec. III-B. We update the data set with the experiment results. Finally, we update the estimate of the Pareto front that is used to compute the EHI as the set of dominating points of the data set. Other options to compute such an estimate from the posterior of the GP exist. However, they are computationally more expensive and they yielded similar performance in our experiments. In the end, the algorithm returns an estimate of the Pareto set and front. The choice of a controller from the Pareto set depends on the performance-robustness trade-off required by the test application and, therefore, is left to the practitioner.
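The overall loop can be sketched as follows. This is a toy, self-contained version: the two objective evaluations are replaced by conflicting analytic functions of a scalar parameter, and the GP-based EHI maximization is replaced by random candidate generation purely to keep the sketch runnable; Algorithm 1 uses the surrogate model and acquisition function of Sec. III-A instead:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins (assumptions, not the paper's objectives): a scalar
# controller parameter theta with conflicting performance/robustness.
def estimate_performance(theta):   # Monte Carlo return estimate
    return -(theta - 1.0) ** 2

def estimate_robustness(theta):    # e.g. delay-margin experiment
    return -(theta + 1.0) ** 2

def dominated(p, q):
    """True if q weakly dominates p with strict improvement somewhere."""
    return all(qi >= pi for pi, qi in zip(p, q)) \
        and any(qi > pi for pi, qi in zip(p, q))

def robust_policy_optimization(n_iter=50):
    data = []                      # (theta, (J, R)) tuples
    for _ in range(n_iter):
        # Surrogate update + EHI maximization would go here; a random
        # candidate keeps the sketch self-contained.
        theta = rng.uniform(-2.0, 2.0)
        y = (estimate_performance(theta), estimate_robustness(theta))
        data.append((theta, y))
    # Pareto set/front estimate: the non-dominated observations.
    return [(t, y) for t, y in data
            if not any(dominated(y, y2) for _, y2 in data)]
```

The returned list approximates the Pareto set (parameters) together with the corresponding front (objective values).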
IV Experimental Results
We compare the robust policy optimization algorithm in Algorithm 1 to its non-robust counterpart based on scalar BO as, e.g., in [6, 18]. We use the scalar equivalent of EHI for the non-robust case, i.e., the expected improvement (EI) algorithm. We present two sets of experiments: training controllers in simulation and directly on hardware, respectively. In both cases, the learned controllers are tested on the hardware in a set of different conditions.
System. We learn a controller for a Furuta pendulum (see Figure 1), a system that is closely related to the well-known cart-pole. It replaces the cart with a rotary arm that rotates in the horizontal plane. In our experiments, we use the Qube Servo 2 by Quanser, a small-scale Furuta pendulum. It uses a brushless DC motor to exert a torque on the rotary arm, and it is equipped with two incremental optical encoders with 2048 counts per revolution to measure the angles of the rotary arm and the pendulum. For sim-to-real, we use the dynamics model provided in the Qube Servo 2 manual, which is a nonlinear rigid-body model. A more detailed model is available in the literature.
Controller. We consider a state feedback controller to stabilize the pendulum about the vertical equilibrium. The system has four states: the angular positions of the rotary arm and the pendulum, with the zero pendulum angle corresponding to the vertical position, and the corresponding angular velocities. We control the voltage applied to the motor. We use the encoder readings as estimates of the angular positions, and we apply a low-pass filter to the difference of consecutive angular positions to estimate the angular velocities. We aim to find a linear state feedback controller, i.e., a vector of four gains multiplying the estimated state.
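A minimal sketch of such a controller could look as follows; the sampling period and filter time constant are illustrative assumptions, not the values used in our experiments:

```python
import numpy as np

class StateFeedbackController:
    """Linear state feedback u = theta^T x_hat, with angular velocities
    estimated by low-pass-filtered finite differences of the two
    encoder angles (rotary arm and pendulum).  `dt` is the control
    period and `tau` the filter time constant -- both illustrative."""

    def __init__(self, theta, dt=0.01, tau=0.05):
        self.theta = np.asarray(theta, dtype=float)  # 4 gains
        self.dt, self.tau = dt, tau
        self.prev_angles = None
        self.vel = np.zeros(2)

    def __call__(self, angles):
        angles = np.asarray(angles, dtype=float)
        if self.prev_angles is not None:
            raw = (angles - self.prev_angles) / self.dt
            a = self.dt / (self.tau + self.dt)   # first-order low-pass
            self.vel = (1 - a) * self.vel + a * raw
        self.prev_angles = angles
        x_hat = np.concatenate([angles, self.vel])  # estimated state
        return float(self.theta @ x_hat)            # motor voltage
```

In our setting, the four gains in theta are exactly the parameters optimized by the (MO)BO loop.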
Scaling and reward. We define a state- and action-dependent reward as the negative of a quadratic cost in the state and the control input. The performance associated with a controller is the expected average reward it induces on the system over a trajectory of fixed duration. To prevent one of the objectives from dominating the contribution to the hypervolume improvement in the EHI algorithm, we must normalize them. We control the range of the robustness indicators, see Sec. III-B, and, therefore, it is easy to rescale them to the unit interval. The unnormalized return is clipped to an empirically observed range and rescaled to the unit interval as well. Since the pendulum incurs substantially different returns when a stabilizing or destabilizing controller is used, we cannot rescale this range linearly. Instead, we use a piecewise-linear function. In particular, since we observe empirically that stabilizing controllers have a performance between -20 and 0, we rescale the range [-20, 0] linearly to the upper part of the unit interval and the remaining range to the lower part. This differentiates coarsely the quality of unstable controllers, and it gives a more refined scale over stable ones.
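The piecewise-linear normalization can be sketched as below. The knee at -20 follows the empirical observation above; the clipping value and the split of the unit interval are illustrative assumptions:

```python
import numpy as np

def rescale_return(J, lo=-100.0, knee=-20.0):
    """Piecewise-linear normalization of a return J to [0, 1].

    Illustrative choices: returns are clipped to [lo, 0]; the
    'unstable' range [lo, knee] is mapped to [0, 0.5] and the
    'stabilizing' range [knee, 0] to [0.5, 1.0], giving a finer
    scale over stable controllers.  lo and the 0.5 split are
    assumptions, not necessarily the paper's exact values.
    """
    J = float(np.clip(J, lo, 0.0))
    if J <= knee:
        return 0.5 * (J - lo) / (knee - lo)
    return 0.5 + 0.5 * (J - knee) / (0.0 - knee)
```

Allotting a fixed share of the unit interval to stabilizing controllers prevents the large negative returns of unstable ones from compressing the scale where the interesting trade-offs live.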
[Table I: sim-to-real test results. For each scenario (standard, motor noise, sensor noise, added mass), the table reports the failure rate and the average failure time (s).]
[Table II: hardware test results. For each scenario (added mass, larger added mass, added mass and length, added mass with motor and sensor noise), the table reports the failure rate and the average failure time (s).]
Surrogate models. For the non-robust algorithm, we use a standard GP model with a zero-mean prior and a Matérn kernel with automatic relevance determination (ARD). We place hyperpriors over the lengthscales and the signal standard deviation, and we use a Gaussian likelihood with no hyperprior. Similarly, for the robust algorithm, we use a zero prior mean. The correlation in the input space in the ICM model is captured by an ARD Matérn kernel with the same hyperpriors as in the non-robust case. For the correlation in the output space, we place a Gaussian hyperprior over each entry of the matrix B.
Training. In the sim-to-real setting, we train 5 different controllers for each of these methods: scalar BO (non-robust), MOBO with performance and delay margin (DM), and MOBO with performance and lower gain margin (GM). The training consists of 200 BO iterations evaluated in simulation. In the hardware training setting, we train one controller for scalar BO and one for MOBO-GM using 70 BO iterations evaluated on hardware. In both settings, MOBO requires fewer iterations than the given budget to find satisfactory solutions. Thus, using a stopping criterion in Algorithm 1 would reduce the total number of iterations. We estimate performance by averaging the return over 10 independent runs. To estimate robustness, we require that the controller stabilizes the system for a given delay or gain in 5 independent runs. A trial is deemed stable if the arm and pendulum angles remain within fixed bounds for the whole episode. Every training run lasts for 5 seconds. Figs. 3 and 2 show the fronts obtained by the MOBO-DM and MOBO-GM sim-to-real training, respectively. The gray circles correspond to controllers that appeared stabilizing at first, but that were ruled out with longer simulations, cf. Sec. III-B. The green squares indicate controllers tested on hardware. To emphasize the generality of our method, they were selected to be approximately at the elbow of the front without further tuning.
Sim-to-real test. We test each controller learned in simulation on the hardware 5 times in 4 scenarios: (i) standard sim-to-real, (ii) sim-to-real adding Gaussian noise to the motor voltage, (iii) sim-to-real adding noise to the encoder readings, drawn from a multinomial distribution over a small range of integer encoder counts, and (iv) sim-to-real with an increased pendulum mass. A run is a failure if the pendulum falls, i.e., the stability condition above is violated. In Table I, we compare the controllers in terms of average return, failure rate, and failure time, averaged over the runs that resulted in a failure. The robust methods consistently outperform the non-robust policy optimization across all test scenarios. It appears that the lower gain margin is a more suitable robustness indicator in this setting. This may be due to the fact that, in our experience, the gain margin is less noisy to estimate.
Hardware test. We test each controller learned on hardware 5 times in 4 scenarios: (i) a small extra mass, (ii) a larger extra mass, (iii) extra mass and extra pendulum length, and (iv) extra mass with the actuation and sensor noise used in the sim-to-real experiments. Table II summarizes the test results. Similarly to the sim-to-real setting, the robust algorithm consistently outperforms its non-robust counterpart.
V Concluding Remarks
We present a data-efficient algorithm for robust policy optimization based on multi-objective Bayesian optimization. We suggest a data-driven evaluation of two common robustness indicators, which is suitable for model-free settings. Our hardware experiments on a Furuta pendulum show that our method (i) facilitates simulation-to-real transfer, and (ii) consistently increases the robustness of the learned controllers as compared to BO with a single performance objective. Our results indicate a promising avenue toward robust learning control by leveraging robustness measures from control theory together with multi-objective Bayesian optimization, and they point to several directions for extensions. While we show that gain and delay margins are effective in practice on a mildly nonlinear system, they may not fully characterize robust stability in general [33, 7]. Thus, investigating other relevant robustness indicators that can be efficiently estimated from data in a model-free setting is a topic for future research. Also, using multiple robustness indicators simultaneously is relevant, which our method could support at the expense of a more complex scaling to balance robustness and performance.
-  (2012) Kernels for vector-valued functions: a review. Foundations and Trends in Machine Learning 4 (3), pp. 195–266. Cited by: §III-A.
-  (2007) Optimal control: linear quadratic methods. Courier Corporation. Cited by: §I.
-  (2017) Deep kernels for optimizing locomotion controllers. In Proceedings of the 1st Annual Conference on Robot Learning, Proceedings of Machine Learning Research, Vol. 78, pp. 47–56. Cited by: §I.
-  (2014) Expensive multiobjective optimization for robotics with consideration of heteroscedastic noise. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2230–2235. Cited by: §I.
-  (2008) Feedback systems: an introduction for scientists and engineers. Princeton University Press, Princeton, NJ, USA. Cited by: §I, §III-B.
-  (2016-05) Safe controller optimization for quadrotors with gaussian processes. In IEEE International Conference on Robotics and Automation (ICRA), Vol. , pp. 491–496. Cited by: §I, §IV.
-  (2007-06) The fundamental tradeoff between performance and robustness. IEEE Control Systems Magazine 27 (3), pp. 30–44. Cited by: §I, §I, §V.
-  (2016) Bayesian optimization for learning gaits under uncertainty. Annals of Mathematics and Artificial Intelligence 76 (1), pp. 5–23. Cited by: §I.
-  (2011) On the dynamics of the furuta pendulum. Journal of Control Science and Engineering 2011, pp. 3. Cited by: §IV.
-  (2013) Multiobjective optimization: Principles and case studies. Springer Science & Business Media. Cited by: §II.
-  (2010) Percentile optimization for markov decision processes with parameter uncertainty. Operations research 58 (1), pp. 203–213. Cited by: §I.
-  (1978) Guaranteed margins for LQG regulators. IEEE Transactions on Automatic Control 23 (4), pp. 756–757. Cited by: §I.
-  (2008) The computation of the expected improvement in dominated hypervolume of pareto front approximations. Rapport technique, Leiden University 34. Cited by: §I, §III-A.
-  (2011) Multi-objective optimal control: an introduction. In Control Conference (ASCC), 2011 8th Asian, pp. 1084–1089. Cited by: §I.
-  (2012) Linear robust control. Courier Corporation. Cited by: §I.
-  (2016) Predictive entropy search for multi-objective bayesian optimization. In International Conference on Machine Learning, pp. 1492–1501. Cited by: §I.
-  (2017) Virtual vs. real: trading off simulations and physical experiments in reinforcement learning with bayesian optimization. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 1557–1563. Cited by: §I.
-  (2016-05) Automatic LQR tuning based on Gaussian process global optimization. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 270–277. Cited by: §IV.
-  (1978) The application of bayesian methods for seeking the extremum. Towards global optimization 2 (117-129), pp. 2. Cited by: §I, §IV.
-  (2001) Robust reinforcement learning. In Advances in Neural Information Processing Systems, pp. 1061–1067. Cited by: §I.
-  (2019) Data-efficient auto-tuning with bayesian optimization: an industrial control study. IEEE Transactions on Control Systems Technology. Cited by: §I.
-  (2005) Robust control of markov decision processes with uncertain transition matrices. Operations Research 53 (5), pp. 780–798. Cited by: §I.
-  (2017) Robust adversarial reinforcement learning. In International Conference on Machine Learning, pp. 2817–2826. Cited by: §I.
-  (2016) Qube servo 2 - student workbook. Quanser Consulting Inc. (English). Cited by: §IV.
-  (2004) Gaussian processes in machine learning. In Advanced lectures on machine learning, pp. 63–71. Cited by: §III-A.
-  (1993) Loop transfer recovery: analysis and design. Springer-Verlag. Cited by: §I.
-  (2016) Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE 104 (1), pp. 148–175. Cited by: §I.
-  (2007) A robust markov game controller for nonlinear systems. Applied Soft Computing 7 (3), pp. 818–827. Cited by: §I.
-  (2018) Reinforcement learning: an introduction. MIT press. Cited by: §I.
-  (2013) Expensive multiobjective optimization for robotics. 2013 IEEE International Conference on Robotics and Automation, pp. 973–980. Cited by: §I.
-  (2019) Recovering robustness in model-free reinforcement learning. In 2019 American Control Conference (ACC), pp. 4210–4216. Cited by: §I.
-  (1998) Essentials of robust control. Prentice Hall. Cited by: §I, §III-B.
-  (1996) Robust and optimal control. Vol. 40, Prentice hall New Jersey. Cited by: §III-B, §V.
-  (2016) e-PAL: An Active Learning Approach to the Multi-Objective Optimization Problem. Journal of Machine Learning Research 17 (104), pp. 1–32. Cited by: §I.