Towards a Principled Learning Rate Adaptation for Natural Evolution Strategies

11/22/2021
by Masahiro Nomura, et al.

Natural Evolution Strategies (NES) is a promising framework for black-box continuous optimization problems. NES optimizes the parameters of a probability distribution based on the estimated natural gradient, and one of the key parameters affecting performance is the learning rate. We argue that, from the viewpoint of the natural gradient method, the learning rate should be determined according to the estimation accuracy of the natural gradient. To this end, we propose a new learning rate adaptation mechanism for NES. The proposed mechanism makes it possible to set a high learning rate for problems that are relatively easy to optimize, which speeds up the search. On the other hand, for problems that are difficult to optimize (e.g., multimodal functions), the proposed mechanism sets a conservative learning rate when the estimation accuracy of the natural gradient appears to be low, which results in a robust and stable search. Experimental evaluations on unimodal and multimodal functions demonstrate that the proposed mechanism works properly depending on the search situation and is more effective than the existing approach of using a fixed learning rate.
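To make the idea concrete, below is a minimal sketch of an NES-style update whose learning rate is scaled by a proxy for how reliable the estimated natural gradient looks. This is an illustration under stated assumptions, not the authors' exact mechanism: it updates only the distribution mean on a toy sphere objective, keeps the step size fixed, and uses a hypothetical signal-to-noise-ratio proxy (exponential moving averages of the estimated gradient and of its squared norm) to decide whether to raise or lower the effective learning rate.

```python
# Minimal NES sketch with a hypothetical accuracy-based learning rate adaptation.
# The SNR proxy, constants, and function names are illustrative assumptions.
import numpy as np


def sphere(x):
    return np.sum(x ** 2)


def nes_adaptive_lr(f, dim=10, iterations=300, pop_size=20, seed=0):
    rng = np.random.default_rng(seed)
    mean = rng.normal(size=dim)      # mean of the search distribution
    sigma = 1.0                      # step size (kept fixed for simplicity)
    eta = 0.1                        # base learning rate
    beta = 0.9                       # EMA decay
    ema_grad = np.zeros(dim)         # EMA of estimated natural gradients
    ema_sq = 1e-8                    # EMA of their squared norms

    for _ in range(iterations):
        # Sample candidates from N(mean, sigma^2 I) and evaluate them.
        noise = rng.normal(size=(pop_size, dim))
        candidates = mean + sigma * noise
        fitness = np.array([f(c) for c in candidates])

        # Rank-based utilities: best candidates get the largest weights.
        ranks = np.argsort(np.argsort(fitness))
        utilities = (pop_size - 1 - ranks) / (pop_size - 1) - 0.5

        # Monte Carlo estimate of the natural gradient w.r.t. the mean.
        grad = (utilities[:, None] * noise).sum(axis=0) / pop_size

        # Hypothetical accuracy proxy: signal-to-noise ratio of the
        # accumulated gradient. A high SNR suggests consecutive estimates
        # agree, so a larger effective learning rate is allowed; a low SNR
        # suggests noisy estimates, so the learning rate is kept conservative.
        ema_grad = beta * ema_grad + (1 - beta) * grad
        ema_sq = beta * ema_sq + (1 - beta) * np.dot(grad, grad)
        snr = np.dot(ema_grad, ema_grad) / (ema_sq + 1e-12)
        eta_eff = eta * np.clip(snr / 0.5, 0.1, 2.0)

        mean = mean + eta_eff * sigma * grad

    return mean


if __name__ == "__main__":
    best = nes_adaptive_lr(sphere)
    print("final mean (first 3 coords):", best[:3], "f:", sphere(best))
```

On an easy unimodal objective such as the sphere, consecutive gradient estimates tend to point in similar directions, the SNR proxy stays high, and the effective learning rate is enlarged; on noisy or multimodal landscapes the estimates disagree, the proxy drops, and the update becomes more conservative, which mirrors the behavior described in the abstract.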
