DSOS and SDSOS Optimization: More Tractable Alternatives to Sum of Squares and Semidefinite Optimization
In recent years, optimization theory has been greatly impacted by the advent of sum of squares (SOS) optimization. The reliance of this technique on large-scale semidefinite programs, however, has limited the scale of problems to which it can be applied. In this paper, we introduce DSOS and SDSOS optimization as more tractable alternatives to sum of squares optimization that rely instead on linear and second-order cone programs, respectively. These are optimization problems over certain subsets of sum of squares polynomials (or equivalently, subsets of positive semidefinite matrices), which can be of interest in general applications of semidefinite programming where scalability is a limitation. We show that some basic theorems from SOS optimization that rely on results from real algebraic geometry are still valid for DSOS and SDSOS optimization. Furthermore, we show with numerical experiments from diverse application areas---polynomial optimization, statistics and machine learning, derivative pricing, and control theory---that with reasonable tradeoffs in accuracy, we can handle problems at scales that are currently far beyond the reach of sum of squares approaches. Finally, we provide a review of recent techniques that bridge the gap between our DSOS/SDSOS approach and the SOS approach at the expense of additional running time. The appendix of the paper introduces an accompanying MATLAB package for DSOS and SDSOS optimization.
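To make the core idea concrete, the following is a minimal sketch (not the paper's MATLAB package) of the matrix-level analogue of DSOS: the cone of diagonally dominant matrices with nonnegative diagonal is an LP-representable inner approximation of the positive semidefinite cone (diagonal dominance implies PSD by Gershgorin's circle theorem). The toy problem, variable names, and the use of cvxpy are illustrative assumptions; here the SDP max tr(CQ) s.t. tr(Q) = 1, Q PSD, whose optimal value is the largest eigenvalue of C, is restricted to diagonally dominant Q, yielding a lower bound computable by linear programming.

```python
import numpy as np
import cvxpy as cp

n = 6
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))
C = (C + C.T) / 2  # random symmetric cost matrix (illustrative data)

Q = cp.Variable((n, n), symmetric=True)
constraints = [cp.trace(Q) == 1]
for i in range(n):
    off = [j for j in range(n) if j != i]
    # Diagonal dominance: Q[i,i] >= sum_{j != i} |Q[i,j]|.
    # After splitting the absolute values, these are linear constraints,
    # so the restricted problem is an LP rather than an SDP.
    constraints.append(Q[i, i] >= cp.sum(cp.abs(Q[i, off])))

prob = cp.Problem(cp.Maximize(cp.trace(C @ Q)), constraints)
prob.solve()
print("dd (LP) bound:", prob.value)                       # lower bound
print("SDP optimum:  ", np.linalg.eigvalsh(C).max())      # = lambda_max(C)
```

Replacing diagonal dominance with scaled diagonal dominance (allowing a diagonal rescaling of Q) gives the SOCP-based analogue corresponding to SDSOS, which sandwiches between the LP bound above and the exact SDP value.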