Linear interpolation gives better gradients than Gaussian smoothing in derivative-free optimization

05/29/2019
by Albert S. Berahas, et al.

In this paper, we consider derivative-free optimization problems, where the objective function is smooth but is computed with some amount of noise, function evaluations are expensive, and no derivative information is available. We are motivated by policy optimization problems in reinforcement learning that have recently become popular [Choromanski et al. 2018; Fazel et al. 2018; Salimans et al. 2016] and that can be formulated as derivative-free optimization problems with the aforementioned characteristics. In each of these works some approximation of the gradient is constructed and a (stochastic) gradient method is applied. In [Salimans et al. 2016] the gradient information is aggregated along Gaussian directions, while in [Choromanski et al. 2018] it is computed along orthogonal directions. We provide a convergence rate analysis for a first-order line search method, similar to the ones used in the literature, and derive the conditions on the gradient approximations that ensure this convergence. We then demonstrate, via rigorous analysis of the variance and via numerical comparisons on reinforcement learning tasks, that the Gaussian sampling method used in [Salimans et al. 2016] is significantly inferior to the orthogonal sampling used in [Choromanski et al. 2018], as well as to more general interpolation methods.
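The abstract contrasts three ways of building a gradient approximation from function values alone. The paper's exact algorithms are not reproduced on this page, so the following is a minimal NumPy sketch under our own assumptions; the function names, sampling radius sigma, and sample counts are illustrative choices, not the paper's: an ES-style forward-difference estimator averaged over i.i.d. Gaussian directions (in the spirit of [Salimans et al. 2016]), the same finite differences taken along a random orthonormal basis (in the spirit of [Choromanski et al. 2018]), and the gradient of a linear model interpolating f over a general set of displacements.

```python
import numpy as np

def gaussian_smoothing_grad(f, x, sigma=0.01, num_samples=None, rng=None):
    """Forward differences averaged over i.i.d. Gaussian directions
    (ES-style); estimates the gradient of the Gaussian-smoothed
    objective rather than of f itself."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    m = d if num_samples is None else num_samples
    fx = f(x)
    g = np.zeros(d)
    for _ in range(m):
        u = rng.standard_normal(d)
        g += (f(x + sigma * u) - fx) / sigma * u
    return g / m

def orthogonal_grad(f, x, sigma=0.01, rng=None):
    """Forward differences along a random orthonormal basis; this is a
    special case of linear interpolation on {x, x + sigma * q_i}."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # orthonormal columns
    fx = f(x)
    coeffs = np.array([(f(x + sigma * Q[:, i]) - fx) / sigma
                       for i in range(d)])
    return Q @ coeffs

def linear_interpolation_grad(f, x, displacements):
    """Gradient g of the linear model fitting f(x + u_i) - f(x) ~ u_i^T g,
    solved in the least-squares sense for a general direction set."""
    U = np.atleast_2d(displacements)  # rows are displacement vectors u_i
    fx = f(x)
    delta = np.array([f(x + u) for u in U]) - fx
    g, *_ = np.linalg.lstsq(U, delta, rcond=None)
    return g

# Illustrative use on a smooth objective evaluated with noise:
f = lambda z: float(np.sum(z ** 2)) + 1e-4 * np.random.randn()
x0 = np.ones(10)
g_est = linear_interpolation_grad(f, x0, 0.01 * np.eye(10))
```

Any of these estimators can be plugged into a backtracking line search to obtain a first-order method of the kind whose convergence the paper analyzes; the variance comparison concerns how far each estimate deviates from the true gradient for the same budget of function evaluations.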
