Learning Model-Based Sparsity via Projected Gradient Descent

09/07/2012
by Sohail Bahmani, et al.

Several convex formulations have been proposed for statistical estimation with structured sparsity as the prior. These methods typically require a carefully tuned regularization parameter, which can be a cumbersome or heuristic exercise. Furthermore, the estimate they produce might not belong to the desired sparsity model, even though it accurately approximates the true parameter. Greedy-type algorithms can therefore be more desirable for estimating structured-sparse parameters. So far, however, these greedy methods have mostly focused on linear statistical models. In this paper we study projected gradient descent with a non-convex structured-sparse parameter model as the constraint set. Provided that the cost function has a Stable Model-Restricted Hessian, the algorithm produces an approximation of the desired minimizer. As an example, we elaborate on the application of the main results to estimation in Generalized Linear Models.
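To make the idea concrete, here is a minimal sketch of projected gradient descent with a non-convex sparsity constraint, using the simplest structured-sparse model (plain s-sparsity, where the projection is hard thresholding) and a least-squares cost, i.e. a Generalized Linear Model with the identity link. The function names, step size, and problem sizes are illustrative choices, not the paper's exact algorithm or notation:

```python
import numpy as np

def project_sparse(x, s):
    """Projection onto the set of s-sparse vectors:
    keep the s largest-magnitude entries, zero out the rest."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

def projected_gradient_descent(grad, x0, s, step=0.5, iters=300):
    """Iterate x <- P_s(x - step * grad(x)), where P_s is the
    (non-convex) projection onto the sparsity model."""
    x = project_sparse(x0, s)
    for _ in range(iters):
        x = project_sparse(x - step * grad(x), s)
    return x

# Illustrative example: noiseless sparse linear regression.
rng = np.random.default_rng(0)
n, p, s = 100, 20, 3
A = rng.standard_normal((n, p))
x_true = np.zeros(p)
x_true[:s] = [2.0, -1.5, 1.0]
y = A @ x_true

# Gradient of the least-squares cost f(x) = ||Ax - y||^2 / (2n).
grad = lambda x: A.T @ (A @ x - y) / n
x_hat = projected_gradient_descent(grad, np.zeros(p), s)
```

For richer sparsity models (e.g. group or tree sparsity), only `project_sparse` changes: the gradient step is identical, and the model structure enters solely through the projection. Unlike the convex regularized formulations mentioned above, the iterate here always lies exactly in the sparsity model by construction.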


