Distributed Global Optimization by Annealing

07/20/2019
by Brian Swenson, et al.

The paper considers a distributed algorithm for global minimization of a nonconvex function. The algorithm is a first-order consensus + innovations-type algorithm that incorporates decaying additive Gaussian noise for annealing, and it converges to the set of global minima under certain technical assumptions. The paper presents simple methods for verifying that the required technical assumptions hold and illustrates them with a distributed target-localization application.
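To make the described update concrete, the following is a minimal sketch of a consensus + innovations iteration with decaying additive Gaussian noise for annealing. The function names (distributed_annealing, grad_fns, adjacency) and the specific step-size and noise-decay schedules are illustrative assumptions for exposition, not the exact weight sequences analyzed in the paper.

import numpy as np

def distributed_annealing(grad_fns, adjacency, x0, n_iters=20000, seed=0):
    """Sketch: each agent mixes consensus with neighbors, a local gradient
    step (innovations), and decaying Gaussian noise for annealing.

    grad_fns[i](x) returns agent i's local gradient at x; `adjacency` is the
    symmetric 0/1 matrix of the communication graph; x0 has shape
    (n_agents, dim). The decay rates below are placeholders.
    """
    rng = np.random.default_rng(seed)
    n_agents, dim = x0.shape
    x = x0.copy()
    for k in range(1, n_iters + 1):
        beta = 1.0 / k**0.75                          # consensus weight, decays slowly
        alpha = 1.0 / k                               # gradient (innovations) weight
        gamma = 1.0 / (k * np.sqrt(np.log(k + 1)))    # annealing noise scale
        x_new = np.empty_like(x)
        for i in range(n_agents):
            neighbors = np.nonzero(adjacency[i])[0]
            consensus = sum(x[i] - x[j] for j in neighbors)
            noise = rng.standard_normal(dim)
            x_new[i] = (x[i]
                        - beta * consensus
                        - alpha * grad_fns[i](x[i])
                        + gamma * noise)
        x = x_new
    return x

In this sketch the Gaussian perturbation decays slowly enough to allow escapes from poor local minima early in the run while vanishing asymptotically, which is the annealing mechanism the paper relies on for convergence to global minima.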

Related research

03/18/2019  Annealing for Distributed Global Optimization
The paper proves convergence to global optima for a class of distributed...

10/21/2019  On Distributed Stochastic Gradient Algorithms for Global Optimization
The paper considers the problem of network-based computation of global m...

10/01/2021  STRONG: Synchronous and asynchronous RObust Network localization, under Non-Gaussian noise
Real-world network applications must cope with failing nodes, malicious ...

11/07/2017  Convex Optimization with Nonconvex Oracles
In machine learning and optimization, one often wants to minimize a conv...

03/07/2022  On observability and optimal gain design for distributed linear filtering and prediction
This paper presents a new approach to distributed linear filtering and p...

03/21/2022  Training Quantised Neural Networks with STE Variants: the Additive Noise Annealing Algorithm
Training quantised neural networks (QNNs) is a non-differentiable optimi...

10/27/2014  A Greedy Homotopy Method for Regression with Nonconvex Constraints
Constrained least squares regression is an essential tool for high-dimen...
