Meta-Learning for Black-box Optimization

07/16/2019
by Vishnu TV, et al.

Recently, neural networks trained as optimizers under the "learning to learn" or meta-learning framework have been shown to be effective for a broad range of optimization tasks, including derivative-free black-box function optimization. In particular, recurrent neural networks (RNNs) trained via gradient descent to optimize a diverse set of synthetic non-convex differentiable functions have proven effective at optimizing derivative-free black-box functions. In this work, we propose RNN-Opt: an approach for learning RNN-based optimizers for real-parameter single-objective continuous functions under limited budget constraints. Existing approaches train such models with a meta-learning loss based on observed improvement. Instead, we propose training RNN-Opt on synthetic non-convex functions with known (approximate) optimal values, directly using discounted regret as the meta-learning loss function. We hypothesize that a regret-based loss mimics typical testing scenarios, and therefore leads to better optimizers than those trained only to propose queries that improve on previous queries. Further, RNN-Opt incorporates simple yet effective enhancements to the training and inference procedures to address two practical challenges: i) the unknown range of possible values of the black-box function to be optimized, and ii) practical, domain-knowledge-based constraints on the input parameters. We demonstrate the efficacy of RNN-Opt against existing methods on several synthetic and standard benchmark black-box functions, as well as on an anonymized industrial constrained optimization problem.
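
To make the contrast concrete, here is a minimal sketch (in Python, assuming minimization) of the two kinds of meta-learning loss the abstract contrasts. The function names, the discount factor gamma, and the exact sign and discounting conventions are illustrative assumptions, not code from the paper; the essential difference is that the regret-based loss exploits the known (approximate) optimum f_opt of each training function, while the observed-improvement loss only rewards queries that beat the best value seen so far.

```python
def observed_improvement_loss(f_values):
    """Improvement-based loss used by prior work (minimization): a query
    contributes only to the extent that it improves on the best value seen
    so far; non-improving queries contribute nothing."""
    best_so_far = f_values[0]
    loss = 0.0
    for f_t in f_values[1:]:
        loss += min(f_t - best_so_far, 0.0)  # <= 0; zero when no improvement
        best_so_far = min(best_so_far, f_t)
    return loss

def discounted_regret_loss(f_values, f_opt, gamma=0.98):
    """Regret-based loss: penalize the gap between the best value found so
    far and the known (approximate) optimum f_opt of the training function,
    discounted over the query budget. gamma=0.98 is an illustrative choice."""
    best_so_far = float("inf")
    loss = 0.0
    for t, f_t in enumerate(f_values):
        best_so_far = min(best_so_far, f_t)
        loss += (gamma ** t) * (best_so_far - f_opt)  # regret at step t
    return loss

# Example: a query trajectory converging toward an optimum of 0.0.
queries = [5.0, 2.5, 3.0, 1.0, 0.2]
print(observed_improvement_loss(queries))          # -4.8
print(discounted_regret_loss(queries, f_opt=0.0))  # positive accumulated regret
```

Note the behavioral difference this encodes: a trajectory that plateaus above the true optimum stops incurring observed-improvement loss entirely, but keeps accumulating regret at every remaining step, which is the training signal the regret-based hypothesis relies on.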

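The two practical challenges listed above also admit simple mechanisms. Below is a hedged sketch of what each could look like: rescaling the observed function values so the RNN always sees inputs on a fixed scale, and a soft penalty for violating domain-knowledge constraints. Both are assumptions on my part for illustration; the paper's actual normalization and constraint-handling schemes may differ in detail.

```python
def normalize_observations(f_values):
    """Map the function values observed so far to [0, 1] using their running
    min/max, so the optimizer RNN receives inputs on a predictable scale even
    when the range of the black-box function is unknown."""
    lo, hi = min(f_values), max(f_values)
    if hi == lo:                 # constant so far: avoid division by zero
        return [0.0] * len(f_values)
    return [(v - lo) / (hi - lo) for v in f_values]

def constraint_penalty(x, constraints, weight=10.0):
    """Soft penalty for domain-knowledge constraints expressed as g(x) <= 0.
    Each violated constraint adds weight * max(g(x), 0), steering the
    optimizer toward feasible queries. `constraints` is a list of callables."""
    return weight * sum(max(g(x), 0.0) for g in constraints)

# Example: keep queries inside the box 0 <= x <= 1 (hypothetical constraints).
box = [lambda x: -x, lambda x: x - 1.0]
print(constraint_penalty(1.3, box))  # 3.0: infeasible query is penalized
print(constraint_penalty(0.5, box))  # 0.0: feasible query
```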

Related research:

Meta Learning Black-Box Population-Based Optimizers (03/05/2021)
The no free lunch theorem states that no model is better suited to every...

Learning to Learn without Gradient Descent by Gradient Descent (11/11/2016)
We learn recurrent neural network optimizers trained on simple synthetic...

Learning to Learn from APIs: Black-Box Data-Free Meta-Learning (05/28/2023)
Data-free meta-learning (DFML) aims to enable efficient learning of new...

META-SMGO-Δ: similarity as a prior in black-box optimization (04/30/2023)
When solving global optimization problems in practice, one often ends up...

Reinforced Few-Shot Acquisition Function Learning for Bayesian Optimization (06/08/2021)
Bayesian optimization (BO) conventionally relies on handcrafted acquisit...

Differential Analysis of Triggers and Benign Features for Black-Box DNN Backdoor Detection (07/11/2023)
This paper proposes a data-efficient detection method for deep neural ne...

Zero Grads Ever Given: Learning Local Surrogate Losses for Non-Differentiable Graphics (08/10/2023)
Gradient-based optimization is now ubiquitous across graphics, but unfor...
