Boosting with Structural Sparsity: A Differential Inclusion Approach

04/16/2017
by Chendi Huang, et al.

Boosting, viewed as a gradient descent algorithm, is a popular method in machine learning. In this paper a novel Boosting-type algorithm is proposed based on restricted gradient descent with structural sparsity control, whose underlying dynamics are governed by differential inclusions. In particular, we present an iterative regularization path with structural sparsity, where the parameter is sparse under some linear transform, based on variable splitting and the Linearized Bregman Iteration; hence it is called Split LBI. Despite its simplicity, Split LBI outperforms the popular generalized Lasso in both theory and experiments. A theory of path consistency is presented, showing that, equipped with proper early stopping, Split LBI may achieve model selection consistency under a family of Irrepresentable Conditions that can be weaker than the necessary and sufficient condition for the generalized Lasso. Furthermore, ℓ_2 error bounds are given at the minimax optimal rates. The utility and benefit of the algorithm are illustrated by several applications, including image denoising, partial-order ranking of sports teams, and grouping of world universities from crowdsourced ranking data.
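For concreteness, the following is a minimal sketch of Split LBI-style iterations in Python/NumPy, assuming a least-squares loss (1/(2n))||y − Xβ||² with structural sparsity imposed on Dβ through an auxiliary split variable γ. The parameter names (nu, kappa, alpha), the default step-size heuristic, and the update order follow common Linearized Bregman Iteration practice and are illustrative assumptions, not the authors' reference implementation.

import numpy as np

def split_lbi(X, y, D, nu=1.0, kappa=10.0, alpha=None, n_iter=2000):
    """Sketch of Split LBI: returns a regularization path of (beta, gamma) iterates."""
    n, p = X.shape
    m = D.shape[0]
    if alpha is None:
        # A small, conservative step size (an assumption); it keeps the discrete
        # iteration close to the underlying differential inclusion dynamics.
        alpha = nu / (kappa * (np.linalg.norm(X, 2) ** 2 / n
                               + np.linalg.norm(D, 2) ** 2 / nu))
    beta = np.zeros(p)    # dense parameter, follows plain gradient descent
    gamma = np.zeros(m)   # sparse split variable, follows the Bregman dynamics
    z = np.zeros(m)       # dual/sub-gradient variable driving gamma
    path = []
    for _ in range(n_iter):
        # Gradients of the augmented loss
        #   L(beta, gamma) = (1/2n)||y - X beta||^2 + (1/2 nu)||D beta - gamma||^2
        r = X @ beta - y
        grad_beta = X.T @ r / n + D.T @ (D @ beta - gamma) / nu
        grad_gamma = (gamma - D @ beta) / nu
        beta = beta - kappa * alpha * grad_beta   # unpenalized block: gradient descent
        z = z - alpha * grad_gamma                # penalized block: Bregman update
        # Soft-thresholding keeps gamma sparse along the whole path
        gamma = kappa * np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)
        path.append((beta.copy(), gamma.copy()))
    return path

Each iterate on the returned path is a candidate estimator; in this sketch, the early stopping mentioned in the abstract corresponds to selecting one point on the path, e.g. by cross-validation.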
