Convergence Error Analysis of Reflected Gradient Langevin Dynamics for Globally Optimizing Non-Convex Constrained Problems

03/19/2022
by   Kanji Sato, et al.
Non-convex optimization problems arise in many important applications, yet most algorithms are only proven to converge to stationary points. Gradient Langevin dynamics (GLD) and its variants have attracted increasing attention as a framework that provides theoretical guarantees of convergence to a global solution in non-convex settings. Studies of GLD initially treated unconstrained problems and were only recently extended to non-convex problems with convex constraints by Lamperski (2021). In this work, we handle non-convex problems over a class of non-convex feasible regions. We analyze reflected gradient Langevin dynamics (RGLD), a global optimization algorithm for smoothly constrained problems, including non-convex constrained ones, and derive a convergence rate to a solution with ϵ-sampling error. This rate is faster than the one given by Lamperski (2021) for the convex constrained case. Our proofs exploit the Poisson equation to leverage the reflection term, which yields the faster convergence rate.
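To make the algorithmic idea concrete, here is a minimal, illustrative sketch of a discretized reflected Langevin iteration on a simple box-constrained domain. This is not the paper's exact scheme or analysis setting (the paper studies a reflected SDE on general smoothly constrained regions); the box domain, the test objective, the step size `eta`, and the inverse temperature `beta` are all assumptions chosen for illustration.

```python
import numpy as np

def reflect_into_box(x, lo=0.0, hi=1.0):
    # Fold each coordinate back into [lo, hi] by mirror reflection at the walls,
    # mimicking the reflecting boundary behavior of the constrained dynamics.
    width = hi - lo
    y = (x - lo) % (2 * width)
    y = np.where(y > width, 2 * width - y, y)
    return lo + y

def rgld(grad_f, x0, eta=1e-3, beta=10.0, n_steps=5000, rng=None):
    # Discretized reflected gradient Langevin dynamics:
    # a gradient step plus Gaussian noise, followed by reflection at the boundary.
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta / beta) * noise
        x = reflect_into_box(x)
    return x

# Hypothetical non-convex test objective on the unit box (gradient only):
# quadratic pull toward 0.75 plus a high-frequency oscillation.
grad_f = lambda x: 2.0 * (x - 0.75) + 0.5 * np.sin(8.0 * np.pi * x)
x_final = rgld(grad_f, x0=np.array([0.1, 0.1]))
```

The noise scale `sqrt(2η/β)` is the standard Euler–Maruyama discretization of the Langevin diffusion; the reflection step replaces the projection used in projected variants, and exploiting it analytically (via the Poisson equation) is what the paper credits for the improved rate.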
