IHT dies hard: Provable accelerated Iterative Hard Thresholding

12/26/2017
by Rajiv Khanna, et al.

We study, both in theory and practice, the use of momentum steps in classic iterative hard thresholding (IHT) methods. By simply modifying plain IHT, we investigate its convergence behavior on convex optimization criteria with non-convex constraints, under standard assumptions. In diverse scenarios, we observe that acceleration in IHT leads to significant improvements over state-of-the-art projected gradient descent and Frank-Wolfe variants. As a byproduct of our inspection, we study the impact of the choice of momentum parameter: as in convex settings, two modes of behavior are observed, "rippling" and linear, depending on the level of momentum.
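
To make the accelerated scheme concrete, here is a minimal sketch of momentum-accelerated IHT for sparse least squares. The squared-loss objective, the fixed momentum weight mu, the step-size rule, and the helper names (hard_threshold, accelerated_iht) are illustrative assumptions for this sketch, not the paper's exact algorithm or parameter choices.

```python
import numpy as np

def hard_threshold(x, k):
    # Projection onto the non-convex set {x : ||x||_0 <= k}:
    # keep the k largest-magnitude entries, zero out the rest.
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    z[keep] = x[keep]
    return z

def accelerated_iht(A, y, k, mu=0.9, n_iter=500):
    # Momentum-accelerated IHT for min_x ||y - Ax||^2 s.t. ||x||_0 <= k.
    # mu is a fixed momentum weight (an assumption in this sketch; the
    # abstract notes that the level of momentum changes the convergence mode).
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L step for the least-squares loss
    x_prev = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = x + mu * (x - x_prev)           # momentum extrapolation
        grad = A.T @ (A @ u - y)            # gradient at the extrapolated point
        x_prev, x = x, hard_threshold(u - step * grad, k)
    return x

# Toy usage: recover a 10-sparse signal from random Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))
x_true = np.zeros(400)
x_true[:10] = 1.0
y = A @ x_true
x_hat = accelerated_iht(A, y, k=10)
```

Setting mu=0 recovers plain IHT, so the sketch also shows how small the modification is: acceleration only adds the extrapolation step before the gradient and thresholding updates.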

Related research

06/16/2021 | Momentum-inspired Low-Rank Coordinate Descent for Diagonally Constrained SDPs
We present a novel, practical, and provable approach for solving diagona...

04/12/2016 | Unified Convergence Analysis of Stochastic Momentum Methods for Convex and Non-convex Optimization
Recently, stochastic momentum methods have been widely adopted in train...

10/29/2019 | Learning Sparse Distributions using Iterative Hard Thresholding
Iterative hard thresholding (IHT) is a projected gradient descent algori...

10/08/2016 | Iterative proportional scaling revisited: a modern optimization perspective
This paper revisits the classic iterative proportional scaling (IPS) fro...

02/03/2020 | Complexity Guarantees for Polyak Steps with Momentum
In smooth strongly convex optimization, or in the presence of Hölderian ...

06/23/2021 | Understanding Modern Techniques in Optimization: Frank-Wolfe, Nesterov's Momentum, and Polyak's Momentum
In the first part of this dissertation research, we develop a modular fr...
