Global Error Bounds and Linear Convergence for Gradient-Based Algorithms for Trend Filtering and ℓ_1-Convex Clustering

04/16/2019
by Nhat Ho, et al.

We propose a class of first-order gradient-type optimization algorithms for structured filtering-clustering problems, a class that includes trend filtering and ℓ_1-convex clustering as special cases. Our first main result establishes the linear convergence of deterministic gradient-type algorithms despite the extreme ill-conditioning of the difference operator matrices in these problems. The convergence result rests on a convex-concave saddle point formulation of filtering-clustering problems and on the fact that the dual form of the problem admits a global error bound, which in turn follows from the celebrated Hoffman bound on the distance between a point and its projection onto an optimal set. The linear convergence rate also holds for stochastic variance-reduced gradient-type algorithms. Finally, we present empirical results showing that the algorithms we analyze perform comparably to state-of-the-art algorithms for trend filtering while offering advantages in scalability.
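As a concrete illustration of the problem class (a minimal sketch using standard trend filtering notation, not quoted from the paper), the trend filtering objective and its convex-concave saddle point reformulation can be written as

\min_{\beta \in \mathbb{R}^n} \; \frac{1}{2}\|y - \beta\|_2^2 + \lambda \|D^{(k+1)} \beta\|_1
\;=\; \min_{\beta \in \mathbb{R}^n} \; \max_{\|u\|_\infty \le \lambda} \; \frac{1}{2}\|y - \beta\|_2^2 + u^\top D^{(k+1)} \beta,

where y is the observed signal, D^{(k+1)} denotes the discrete difference operator of order k+1, and \lambda > 0 is the regularization parameter. Maximizing over u in the ℓ_∞-ball of radius \lambda recovers the ℓ_1 penalty, and minimizing over \beta first yields the dual problem to which, per the abstract, a Hoffman-type global error bound applies.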


