Non-local Optimization: Imposing Structure on Optimization Problems by Relaxation

11/11/2020
by Nils Müller, et al.

In stochastic optimization, particularly in evolutionary computation and reinforcement learning, the optimization of a function f: Ω→ℝ is often addressed by optimizing a so-called relaxation θ∈Θ↦𝔼_θ(f) of f, where Θ parameterizes a family of probability measures on Ω. We investigate the structure of such relaxations by means of measure theory and Fourier analysis, which allows us to shed light on the success of many associated stochastic optimization methods. The main structural traits we derive, and which enable fast and reliable optimization of relaxations, are agreement with the optimal values of f, Lipschitz continuity of gradients, and convexity. We emphasize settings where f itself does not exhibit the latter structure, e.g., in the presence of (stochastic) disturbance.
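To illustrate the idea of optimizing a relaxation 𝔼_θ(f) rather than f itself, the following is a minimal sketch (not code from the paper; the objective, hyperparameters, and function names are our own choices). It minimizes the nonsmooth function f(x) = |x| by descending on the Gaussian relaxation μ ↦ 𝔼_{x∼N(μ,σ²)}[f(x)], whose gradient in μ is estimated with the score-function (log-derivative) estimator:

```python
import numpy as np

def f(x):
    # Nonsmooth objective: |x| is not differentiable at its minimizer x = 0,
    # but the Gaussian relaxation mu -> E[f] is smooth in mu.
    return np.abs(x)

def relaxed_descent(mu=3.0, sigma=0.5, lr=0.1, n_samples=200, steps=300, seed=0):
    """Minimize E_theta[f] over theta = mu of N(mu, sigma^2) using the
    score-function estimator:
        grad_mu E[f] = E[ f(x) * (x - mu) / sigma^2 ],  x ~ N(mu, sigma^2).
    """
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        x = rng.normal(mu, sigma, size=n_samples)
        vals = f(x)
        baseline = vals.mean()  # simple baseline for variance reduction
        grad = np.mean((vals - baseline) * (x - mu) / sigma**2)
        mu -= lr * grad
    return mu

mu_star = relaxed_descent()  # ends close to the minimizer x = 0
```

Note that only zeroth-order evaluations of f are used, which is precisely why such relaxations underpin evolution strategies and policy-gradient methods for non-differentiable or disturbed objectives.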


Related research

04/06/2013 · Logical Stochastic Optimization
We present a logical framework to represent and reason about stochastic ...

06/25/2017 · A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints
We develop a simple routine unifying the analysis of several important r...

06/26/2015 · ASOC: An Adaptive Parameter-free Stochastic Optimization Techinique for Continuous Variables
Stochastic optimization is an important task in many optimization proble...

01/30/2022 · SRKCD: a stabilized Runge-Kutta method for stochastic optimization
We introduce a family of stochastic optimization methods based on the Ru...

07/08/2020 · On Entropic Optimization and Path Integral Control
This article is motivated by the question whether it is possible to solv...

03/20/2019 · The importance of better models in stochastic optimization
Standard stochastic optimization methods are brittle, sensitive to steps...
