
Towards a theory of non-commutative optimization: geodesic first and second order methods for moment maps and polytopes

10/27/2019
by   Peter Bürgisser, et al.

This paper initiates a systematic development of a theory of non-commutative optimization. It aims to unify and generalize a growing body of work from the past few years which developed and analyzed algorithms for natural geodesically convex optimization problems on Riemannian manifolds that arise from the symmetries of non-commutative groups. These algorithms minimize the moment map (a non-commutative notion of the usual gradient) and test membership in moment polytopes (a vast class of polytopes, typically of exponential vertex and facet complexity, which arise from this a priori non-convex, non-linear setting). This setting captures a diverse set of problems in different areas of computer science, mathematics, and physics. Several of them were solved efficiently for the first time using non-commutative methods; the corresponding algorithms also lead to solutions of purely structural problems and to many new connections between disparate fields. In the spirit of standard convex optimization, we develop two general methods in the geodesic setting, a first order and a second order method, which respectively receive first and second order information on the "derivatives" of the function to be optimized. These in particular subsume all past results. The main technical work, again unifying and extending much of the previous literature, goes into identifying the key parameters of the underlying group actions which control convergence to the optimum in each of these methods. These non-commutative analogues of "smoothness" are far more complex and require significant algebraic and analytic machinery. Despite this complexity, the way in which these parameters control convergence in both methods is quite simple and elegant. We show how to bound these parameters in several general cases. Our work points to intriguing open problems and suggests further research directions.
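As a concrete toy illustration (not taken from the paper), the simplest commutative instance of moment-map minimization is matrix scaling under the torus action: the Sinkhorn iteration, which alternately normalizes the rows and columns of a positive matrix, can be read as an alternating first-order method driving the "gradient" (the deviation of row and column sums from uniform) to zero. The sketch below assumes a strictly positive input matrix, for which the iteration is known to converge to a doubly stochastic limit.

```python
import numpy as np

def sinkhorn_scale(A, iters=500):
    """Alternately normalize rows and columns of a positive matrix.

    A toy commutative analogue of moment-map minimization: the iteration
    stops moving exactly when all row and column sums equal 1, i.e. when
    the 'gradient' of the scaling problem vanishes.
    """
    A = np.array(A, dtype=float)
    for _ in range(iters):
        A = A / A.sum(axis=1, keepdims=True)  # make every row sum to 1
        A = A / A.sum(axis=0, keepdims=True)  # make every column sum to 1
    return A

B = sinkhorn_scale([[1.0, 2.0], [3.0, 4.0]])
# After convergence, B is (approximately) doubly stochastic.
```

In the non-commutative setting treated by the paper, the torus is replaced by a general group action and the row/column sums by the moment map; the analysis of when and how fast such iterations converge is precisely what the paper's smoothness-like parameters control.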

