
Recursive Decomposition for Nonconvex Optimization

11/08/2016
by Abram L. Friesen, et al., University of Washington

Continuous optimization is an important problem in many areas of AI, including vision, robotics, probabilistic inference, and machine learning. Unfortunately, most real-world optimization problems are nonconvex, causing standard convex techniques to find only local optima, even with extensions like random restarts and simulated annealing. We observe that, in many cases, the local modes of the objective function have combinatorial structure, and thus ideas from combinatorial optimization can be brought to bear. Based on this, we propose a problem-decomposition approach to nonconvex optimization. Similarly to DPLL-style SAT solvers and recursive conditioning in probabilistic inference, our algorithm, RDIS, recursively sets variables so as to simplify and decompose the objective function into approximately independent sub-functions, until the remaining functions are simple enough to be optimized by standard techniques like gradient descent. The variables to set are chosen by graph partitioning, ensuring decomposition whenever possible. We show analytically that RDIS can solve a broad class of nonconvex optimization problems exponentially faster than gradient descent with random restarts. Experimentally, RDIS outperforms standard techniques on problems like structure from motion and protein folding.
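
To make the decomposition idea concrete, here is a minimal sketch in Python of an RDIS-style recursion. This is not the authors' implementation: the names (pair_term, connected_components, leaf_size, settings), the most-connected-variable separator heuristic, and the random sampling of separator values are all illustrative assumptions. The objective is assumed to be a sum of terms, each touching a small subset of variables, so that fixing a separator splits the remaining variables into independent components.

```python
# Illustrative sketch only (hypothetical names; not the RDIS reference code).
# Assumes the objective is a sum of terms, each over a few variables.
import numpy as np
from scipy.optimize import minimize

def connected_components(terms, free_vars):
    """Group free variables into components connected by shared terms."""
    comps, unseen = [], set(free_vars)
    while unseen:
        comp, frontier = set(), {unseen.pop()}
        while frontier:
            v = frontier.pop()
            comp.add(v)
            for scope, _ in terms:
                if v in scope:
                    new = set(scope) & unseen
                    frontier |= new
                    unseen -= new
        comps.append(sorted(comp))
    return comps

def rdis(terms, free_vars, fixed, leaf_size=2, settings=8, rng=None):
    """Return (value, assignment) minimizing the terms over free_vars."""
    rng = rng if rng is not None else np.random.default_rng(0)
    if len(free_vars) <= leaf_size:
        # Base case: a low-dimensional smooth subproblem, handed to a
        # standard gradient-based solver.
        def sub(x):
            vals = {**fixed, **dict(zip(free_vars, x))}
            return sum(fn(vals) for scope, fn in terms
                       if set(scope) & set(free_vars)
                       and set(scope) <= vals.keys())
        res = minimize(sub, np.zeros(len(free_vars)))
        return res.fun, {**fixed, **dict(zip(free_vars, res.x))}
    # Recursive case: pick a separator variable (a crude stand-in for the
    # paper's graph-partitioning step) and try several values for it.
    degree = {v: sum(v in scope for scope, _ in terms) for v in free_vars}
    sep = max(free_vars, key=degree.get)
    rest = [v for v in free_vars if v != sep]
    best = (np.inf, None)
    for _ in range(settings):
        trial = {**fixed, sep: float(rng.uniform(-2.0, 2.0))}
        # Terms fully determined once the separator is set.
        total = sum(fn(trial) for scope, fn in terms
                    if sep in scope and set(scope) <= trial.keys())
        assign = dict(trial)
        # Setting the separator decomposes the rest into independent
        # components, each solved recursively.
        active = [(s, f) for s, f in terms if set(s) & set(rest)]
        for comp in connected_components(active, rest):
            val, sub_assign = rdis(terms, comp, trial, leaf_size, settings, rng)
            total += val
            assign.update(sub_assign)
        if total < best[0]:
            best = (total, assign)
    return best

# Toy usage: a nonconvex chain objective. Fixing the middle variable splits
# the chain into two independent halves, each optimized separately.
def pair_term(a, b):
    return (a, b), lambda v: (v[a] ** 2 - 1.0) ** 2 + (v[a] - v[b]) ** 2

terms = [pair_term(f"x{i}", f"x{i+1}") for i in range(4)]
value, assignment = rdis(terms, [f"x{i}" for i in range(5)], {})
print(value, assignment)
```

The uniform sampling of separator values above is only a placeholder, playing the role that restarts play in the paper's analysis; in RDIS itself the cut variables are chosen by graph partitioning, which guarantees decomposition whenever possible, rather than by the degree heuristic used here.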


Related Research

09/16/2019 · Gumbel-softmax Optimization: A Simple General Framework for Combinatorial Optimization Problems on Graphs
Many problems in real life can be converted to combinatorial optimizatio...

06/29/2020 · Optimization Landscape of Tucker Decomposition
Tucker decomposition is a popular technique for many data analysis and m...

04/14/2020 · Gumbel-softmax-based Optimization: A Simple General Framework for Optimization Problems on Graphs
In computer science, there exist a large number of optimization problems...

09/22/2022 · Nonsmooth Composite Nonconvex-Concave Minimax Optimization
Nonconvex-concave minimax optimization has received intense interest in ...

10/29/2020 · A Single-Loop Smoothed Gradient Descent-Ascent Algorithm for Nonconvex-Concave Min-Max Problems
Nonconvex-concave min-max problem arises in many machine learning applic...

10/22/2022 · SurCo: Learning Linear Surrogates For Combinatorial Nonlinear Optimization Problems
Optimization problems with expensive nonlinear cost functions and combin...

11/12/2019 · Multi-Objectivizing Sum-of-the-Parts Combinatorial Optimization Problems by Random Objective Decomposition
Multi-objectivization is a term used to describe strategies developed fo...