Large-scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks

07/09/2023
by Jiaqi Jiang, et al.

We present a metaheuristic for non-convex optimization, based on the training of a deep generative network, which enables effective searching within continuous, ultra-high-dimensional landscapes. During network training, populations of sampled local gradients are utilized within a customized loss function to evolve the network output distribution toward one peaked at high-performing optima. The deep network architecture is tailored to support progressive growth over the course of training, which allows the algorithm to manage the curse of dimensionality characteristic of high-dimensional landscapes. We apply our concept to a range of standard optimization problems with dimensions as high as one thousand and show that our method outperforms state-of-the-art benchmark algorithms while requiring fewer function evaluations. We also discuss the roles of deep network over-parameterization, loss function engineering, and proper network architecture selection in optimization, and explain why the required batch size of sampled local gradients is independent of problem dimension. These concepts form the foundation for a new class of algorithms that utilize customizable and expressive deep generative networks to solve non-convex optimization problems.
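To make the training loop concrete, the following is a minimal PyTorch sketch of the population-gradient mechanism the abstract describes, not the authors' exact implementation. It assumes lower objective values are better, uses a batch-normalized softmax weight in place of the paper's exact performance weighting, and substitutes a fixed two-layer generator for the progressively grown architecture; `objective`, `SIGMA`, and all sizes are illustrative placeholders.

```python
import math
import torch
import torch.nn as nn

DIM = 1000     # problem dimension (illustrative)
LATENT = 64    # latent noise dimension (illustrative)
SIGMA = 0.5    # temperature of the performance weighting (illustrative)

def objective(x):
    # Placeholder non-convex test function (Rastrigin); swap in any
    # differentiable objective of interest.
    return 10.0 * x.shape[-1] + (x**2 - 10.0 * torch.cos(2 * math.pi * x)).sum(-1)

# Stand-in for the progressively grown generative network.
generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, DIM),
)
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(2000):
    z = torch.randn(128, LATENT)   # batch of latent samples
    x = generator(z)               # candidate solutions

    # Population of sampled local gradients at the candidates,
    # evaluated without backpropagating through the generator.
    x_eval = x.detach().requires_grad_(True)
    f = objective(x_eval)
    (g,) = torch.autograd.grad(f.sum(), x_eval)

    # Customized loss: gradients of well-performing (low-f) samples are
    # weighted exponentially more, steering the output distribution
    # toward peaks at high-performing optima. Softmax normalizes the
    # exponential weights over the batch for numerical stability.
    w = torch.softmax(-f.detach() / SIGMA, dim=0)
    loss = (w * (x * g).sum(-1)).sum()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the gradient of this loss with respect to each sample is its own (detached) objective gradient scaled by its performance weight, each update nudges the generator so that its samples move downhill, with the best samples in the batch dominating; over many iterations this concentrates the output distribution on the basins of the highest-performing optima.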
