
Block-wise Minimization-Majorization algorithm for Huber's criterion: sparse learning and applications

by Esa Ollila, et al.

Huber's criterion can be used for robust joint estimation of the regression and scale parameters in the linear model. Huber's motivation (Huber, 1981) for introducing the criterion stemmed from the non-convexity of the joint maximum likelihood (ML) objective function as well as the non-robustness (unbounded influence function) of the associated ML-estimate of scale. In this paper, we show how the original algorithm proposed by Huber can be cast within the block-wise minimization-majorization framework. In addition, we propose novel data-adaptive step sizes for both the location and scale updates, which further improve convergence. We then show how Huber's criterion can be used for sparse learning in an underdetermined linear model using the iterative hard thresholding approach. We demonstrate the usefulness of the algorithms in an image denoising application and in simulation studies.
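The block-wise idea behind the abstract can be illustrated with a minimal alternating scheme: keep the scale fixed and take an IRLS step for the regression coefficients with Huber weights, then keep the coefficients fixed and apply a fixed-point update to the scale. This is only a sketch of the general approach, not the paper's MM algorithm with data-adaptive step sizes; the function names, the tuning constant `c = 1.345`, and the consistency constant `0.7102` (an approximation of E[psi_c(Z)^2] for standard normal Z) are illustrative assumptions.

```python
import numpy as np

def huber_psi(t, c=1.345):
    """Huber's psi function: the clipped identity score."""
    return np.clip(t, -c, c)

def huber_joint_fit(X, y, c=1.345, iters=200, tol=1e-10):
    """Illustrative block-wise alternating scheme for joint regression
    and scale estimation in the spirit of Huber's criterion.
    NOT the paper's algorithm: a plain IRLS step for the location block
    and a fixed-point step for the scale block."""
    n, p = X.shape
    # initialize with least squares and a MAD-based scale estimate
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ beta
    sigma = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
    delta = 0.7102  # approx. E[psi_c(Z)^2], Z ~ N(0,1), for c = 1.345
    for _ in range(iters):
        # scale block: fixed-point update with beta held fixed
        sigma = sigma * np.sqrt(np.mean(huber_psi(r / sigma, c) ** 2) / delta)
        # location block: one IRLS step with Huber weights w_i = psi(t_i)/t_i
        t = r / sigma
        w = np.ones_like(t)
        big = np.abs(t) > c
        w[big] = c / np.abs(t[big])
        beta_new = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
        r = y - X @ beta
    return beta, sigma
```

Because the psi function is bounded, gross outliers in `y` receive down-weighted influence in the location step and bounded influence in the scale step, which is the robustness property the abstract refers to.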
