Mirror Natural Evolution Strategies

08/01/2023
by Haishan Ye, et al.

Zeroth-order optimization has been widely used in machine learning applications. However, the theoretical study of zeroth-order optimization has focused on algorithms that approximate (first-order) gradients using (zeroth-order) function-value differences along random directions. The theory of algorithms that approximate both gradient and Hessian information from zeroth-order queries is much less studied. In this paper, we focus on the theory of zeroth-order optimization that utilizes both first-order and second-order information approximated from zeroth-order queries. We first propose a novel reparameterized objective function with parameters (μ, Σ). This reparameterized objective function achieves its optimum at the minimizer and at the Hessian inverse of the original objective function, respectively, up to small perturbations. Accordingly, we propose a new algorithm to minimize our proposed reparameterized objective, which we call MiNES (mirror descent natural evolution strategy). We show that the estimated covariance matrix of MiNES converges to the inverse of the Hessian matrix of the objective function at a rate 𝒪(1/k), where k is the iteration number and 𝒪(·) hides constants and logarithmic terms. We also provide the explicit convergence rate of MiNES and show how the covariance matrix promotes that rate.
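To make the setting concrete, the sketch below shows the two classical zeroth-order estimators the abstract alludes to: a gradient estimate from function-value differences along random Gaussian directions, and a Hessian estimate via Gaussian smoothing. This is an illustrative sketch of the general technique, not the paper's MiNES algorithm; all function names and parameter choices here are the editor's assumptions.

```python
import numpy as np

def zo_gradient(f, x, sigma=1e-4, num_dirs=50, rng=None):
    """Estimate the gradient of f at x from zeroth-order queries:
    symmetric function-value differences along random Gaussian directions.
    (Standard two-point estimator; illustrative, not MiNES itself.)"""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape[0])
        g += (f(x + sigma * u) - f(x - sigma * u)) / (2.0 * sigma) * u
    return g / num_dirs

def zo_hessian(f, x, sigma=1e-2, num_dirs=20000, rng=None):
    """Estimate the Hessian of f at x from zeroth-order queries via
    Gaussian smoothing: the average of
        (f(x+s u) + f(x-s u) - 2 f(x)) / (2 s^2) * (u u^T - I)
    over standard Gaussian directions u equals the Hessian exactly for
    quadratics and approximates it in general."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    H = np.zeros((d, d))
    fx = f(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        c = (f(x + sigma * u) + f(x - sigma * u) - 2.0 * fx) / (2.0 * sigma ** 2)
        H += c * (np.outer(u, u) - np.eye(d))
    return H / num_dirs

# Usage on a toy quadratic f(x) = 0.5 x^T A x, whose Hessian is A.
A = np.diag([1.0, 3.0])
f = lambda x: 0.5 * x @ A @ x
rng = np.random.default_rng(0)

x = np.array([3.0, -2.0])
for _ in range(300):                      # gradient descent with estimated gradients
    x = x - 0.2 * zo_gradient(f, x, rng=rng)

H_hat = zo_hessian(f, np.zeros(2), rng=rng)  # noisy estimate of A
```

Averaging over many directions controls the estimator variance; methods such as MiNES go further by maintaining a covariance matrix whose inverse tracks the Hessian, rather than re-estimating it from scratch at each iterate.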


research
09/06/2020

Convergence Analysis of the Hessian Estimation Evolution Strategy

The class of algorithms called Hessian Estimation Evolution Strategies (...
research
04/29/2014

Fast Approximation of Rotations and Hessians matrices

A new method to represent and approximate rotation matrices is introduce...
research
09/29/2020

Mathematical derivation for Vora-Value based filter design method: Gradient and Hessian

In this paper, we present the detailed mathematical derivation of the gr...
research
12/19/2011

Evolutionary Hessian Learning: Forced Optimal Covariance Adaptive Learning (FOCAL)

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) has been th...
research
06/18/2020

Improving the Convergence Rate of One-Point Zeroth-Order Optimization using Residual Feedback

Many existing zeroth-order optimization (ZO) algorithms adopt two-point ...
research
02/19/2018

Matrix Exponential Learning for Resource Allocation with Low Informational Exchange

We consider a distributed resource allocation problem in a multicarrier ...
research
01/05/2023

Restarts subject to approximate sharpness: A parameter-free and optimal scheme for first-order methods

Sharpness is an almost generic assumption in continuous optimization tha...
