
Global Continuous Optimization with Error Bound and Fast Convergence

07/17/2016
by Kenji Kawaguchi, et al. (MIT; Japan Atomic Energy Agency)

This paper considers global optimization of a black-box objective function that may be non-convex and non-differentiable. Such difficult optimization problems arise in many real-world applications, such as parameter tuning in machine learning, engineering design problems, and planning with complex physics simulators. This paper proposes a new global optimization algorithm, called Locally Oriented Global Optimization (LOGO), which aims for both fast convergence in practice and a finite-time error bound in theory. The advantages and usage of the new algorithm are illustrated via theoretical analysis and experiments conducted with 11 benchmark test functions. Further, we modify the LOGO algorithm to solve a planning problem via policy search with a continuous state/action space and a long time horizon while maintaining its finite-time error bound. We apply the proposed planning method to accident management of a nuclear power plant. The result of the application study demonstrates the practical utility of our method.
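The black-box setting described above means the optimizer may only query function values, with no gradients and no convexity assumptions. The abstract does not detail LOGO's mechanics, so the sketch below is not the authors' algorithm; it is a minimal baseline illustrating the problem interface such a method would target: a value-only objective over a bounded box, optimized by uniform random search. The objective function used is a hypothetical non-convex, non-differentiable example chosen for illustration.

```python
import random

def black_box_minimize(f, bounds, n_evals=2000, seed=0):
    """Minimize a black-box function by uniform random search.

    f      : objective taking a list of floats; may be non-convex and
             non-differentiable, since only evaluations are used
    bounds : list of (low, high) tuples, one per dimension
    Returns the best point found and its value.
    """
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_evals):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        val = f(x)            # the only access to f: pointwise evaluation
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Hypothetical non-convex, non-differentiable test objective
# (minimized near x = (0.3, -0.7)).
def objective(x):
    return abs(x[0] - 0.3) + abs(x[1] + 0.7) + 0.1 * abs(x[0] * x[1])

x_star, f_star = black_box_minimize(objective, [(-1.0, 1.0), (-1.0, 1.0)])
```

Unlike this naive baseline, methods in the LOGO family come with finite-time guarantees on how far the returned value can be from the global optimum, which is the theoretical contribution the abstract highlights.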
