Lower Bounds for Higher-Order Convex Optimization

10/27/2017
by Naman Agarwal et al.

State-of-the-art methods in convex and non-convex optimization employ higher-order derivative information, either implicitly or explicitly. We explore the limitations of higher-order optimization and prove that even for convex optimization, a polynomial dependence on the approximation guarantee and higher-order smoothness parameters is necessary. As a special case, we show Nesterov's accelerated cubic regularization method to be nearly tight.
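
For context, a minimal sketch (an illustration, not taken from the paper): Nesterov's cubic regularization of Newton's method minimizes, at each iteration, a cubic upper model of the objective. Assuming f is convex with an L_2-Lipschitz Hessian, the step is

```latex
% Cubic-regularized Newton step (Nesterov-Polyak model).
% The accelerated variant of this scheme attains an O(1/k^3)
% convergence rate on convex functions with L_2-Lipschitz Hessians.
x_{k+1} = \arg\min_{y} \Big\{ f(x_k) + \nabla f(x_k)^\top (y - x_k)
    + \tfrac{1}{2}\,(y - x_k)^\top \nabla^2 f(x_k)\,(y - x_k)
    + \tfrac{L_2}{6}\,\lVert y - x_k \rVert^3 \Big\}
```

The near-tightness claimed in the abstract refers to the accelerated variant of this scheme: the paper's lower bound matches its iteration complexity up to lower-order factors.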


Related research

06/04/2019 · Higher-Order Accelerated Methods for Faster Non-Smooth Optimization
"We provide improved convergence rates for various non-smooth optimizatio..."

01/04/2022 · Sparse Non-Convex Optimization For Higher Moment Portfolio Management
"One of the reasons that higher order moment portfolio optimization metho..."

08/24/2019 · Computing ground states of Bose-Einstein Condensates with higher order interaction via a regularized density function formulation
"We propose and analyze a new numerical method for computing the ground s..."

02/18/2016 · Efficient approaches for escaping higher order saddle points in non-convex optimization
"Local search heuristics for non-convex optimizations are popular in appl..."

06/16/2020 · Additive Poisson Process: Learning Intensity of Higher-Order Interaction in Stochastic Processes
"We present the Additive Poisson Process (APP), a novel framework that ca..."

06/14/2020 · Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits
"We study the problem of zero-order optimization of a strongly convex fun..."

04/21/2014 · A higher-order MRF based variational model for multiplicative noise reduction
"The Fields of Experts (FoE) image prior model, a filter-based higher-ord..."
