
Optimization over Nonnegative and Convex Polynomials With and Without Semidefinite Programming
The problem of optimizing over the cone of nonnegative polynomials is a ...

Response to "Counterexample to global convergence of DSOS and SDSOS hierarchies"
In a recent note [8], the author provides a counterexample to the global...

Geometry of 3D Environments and Sum of Squares Polynomials
Motivated by applications in robotics and computer vision, we study prob...

A Survey of Recent Scalability Improvements for Semidefinite Programming with Applications in Machine Learning, Control, and Robotics
Historically, scalability has been a major challenge to the successful a...

Reach-Avoid Problems via Sum-of-Squares Optimization and Dynamic Programming
Reach-avoid problems involve driving a system to a set of desirable conf...

Engineering and Business Applications of Sum of Squares Polynomials
Optimizing over the cone of nonnegative polynomials, and its dual counte...

Definable Ellipsoid Method, Sums-of-Squares Proofs, and the Isomorphism Problem
The ellipsoid method is an algorithm that solves the (weak) feasibility ...
DSOS and SDSOS Optimization: More Tractable Alternatives to Sum of Squares and Semidefinite Optimization
In recent years, optimization theory has been greatly impacted by the advent of sum of squares (SOS) optimization. The reliance of this technique on large-scale semidefinite programs, however, has limited the scale of problems to which it can be applied. In this paper, we introduce DSOS and SDSOS optimization as more tractable alternatives to sum of squares optimization that rely instead on linear and second-order cone programs, respectively. These are optimization problems over certain subsets of sum of squares polynomials (or, equivalently, subsets of positive semidefinite matrices), which can be of interest in general applications of semidefinite programming where scalability is a limitation. We show that some basic theorems from SOS optimization which rely on results from real algebraic geometry are still valid for DSOS and SDSOS optimization. Furthermore, we show with numerical experiments from diverse application areas (polynomial optimization, statistics and machine learning, derivative pricing, and control theory) that, with reasonable trade-offs in accuracy, we can handle problems at scales that are currently far beyond the reach of sum of squares approaches. Finally, we provide a review of recent techniques that bridge the gap between our DSOS/SDSOS approach and the SOS approach at the expense of additional running time. The appendix of the paper introduces an accompanying MATLAB package for DSOS and SDSOS optimization.
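To illustrate the idea behind DSOS optimization, here is a minimal sketch (not the paper's MATLAB package) of the sufficient condition it exploits: a symmetric matrix that is diagonally dominant with nonnegative diagonal is positive semidefinite by Gershgorin's circle theorem, and this condition can be checked, and imposed, with linear constraints alone. The function name and the example polynomial below are illustrative choices, not from the paper.

```python
import numpy as np

def is_diagonally_dominant(Q):
    """Return True if Q[i, i] >= sum_{j != i} |Q[i, j]| for every row i.

    A symmetric matrix satisfying this (with nonnegative diagonal) is
    positive semidefinite by Gershgorin's circle theorem, so a polynomial
    with such a Gram matrix is a sum of squares. This is the LP-checkable
    sufficient condition underlying DSOS optimization.
    """
    Q = np.asarray(Q, dtype=float)
    off_diag = np.sum(np.abs(Q), axis=1) - np.abs(np.diag(Q))
    return bool(np.all(np.diag(Q) >= off_diag))

# Illustrative example: p(x, y) = 2x^2 + 2xy + 2y^2 has Gram matrix
# Q = [[2, 1], [1, 2]] in the monomial basis (x, y), i.e. p = [x y] Q [x y]^T.
# Q is diagonally dominant, so p is DSOS and hence nonnegative.
Q = np.array([[2.0, 1.0], [1.0, 2.0]])
print(is_diagonally_dominant(Q))  # True
```

Replacing the absolute-value bound with the weaker requirement that Q be "scaled diagonally dominant" yields the SDSOS cone, which is checkable with second-order cone constraints instead of a full semidefinite constraint.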