Robust Hierarchical-Optimization RLS Against Sparse Outliers

10/11/2019
by Konstantinos Slavakis et al.

This paper fortifies the recently introduced hierarchical-optimization recursive least squares (HO-RLS) against sparse outliers that infrequently contaminate linear-regression models. Outliers are modeled as nuisance variables and are estimated jointly with the linear filter/system variables via a sparsity-inducing, (non-)convexly regularized least-squares task. The proposed outlier-robust HO-RLS builds on steepest-descent directions with a constant step size (learning rate), requires no matrix inversions (and hence no matrix-inversion lemma), accommodates colored nominal noise with a known correlation matrix, exhibits a small computational footprint, and offers theoretical guarantees, in a probabilistic sense, for the convergence of the system estimates to the solutions of a hierarchical-optimization problem: minimize a convex loss, which encodes a priori knowledge about the unknown system, over the set of minimizers of the classical ensemble LS loss. Extensive numerical tests on synthetically generated data, in both stationary and non-stationary scenarios, showcase notable improvements of the proposed scheme over state-of-the-art techniques.
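As a rough illustration of the outlier-as-nuisance-variable idea, the Python sketch below runs a steepest-descent LS update with a constant step size while soft-thresholding the residual to estimate the sparse outlier, and optionally applies a diminishing perturbation toward a convex prior loss in the spirit of hierarchical optimization. All names and parameters here (robust_ho_rls_sketch, mu, lam, eps0, grad_phi) are illustrative assumptions, not the authors' exact recursion, and the colored-noise handling and convergence machinery of the paper are omitted.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau*|.|: the sparsity-inducing (convex) regularizer."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def robust_ho_rls_sketch(X, y, mu=0.05, lam=0.5, grad_phi=None, eps0=0.01):
    """Illustrative outlier-robust recursion (not the authors' exact algorithm).

    Data model: y[n] = X[n] @ w + o[n] + noise, with o[n] a sparse outlier.
    - w is updated by a steepest-descent step with constant step size mu,
      so no matrix inversion (or matrix-inversion lemma) is needed;
    - o[n] is treated as a nuisance variable and estimated by
      soft-thresholding the residual (l1 regularization, weight lam);
    - grad_phi, if supplied, is the gradient of a convex loss phi encoding
      a-priori knowledge; the diminishing perturbation eps0/(n+1) nudges
      the iterates toward minimizers of phi among the LS minimizers,
      mimicking the hierarchical-optimization objective.
    """
    n_samples, dim = X.shape
    w = np.zeros(dim)
    outliers = np.zeros(n_samples)
    for n in range(n_samples):
        x = X[n]
        o = soft_threshold(y[n] - x @ w, lam)    # sparse-outlier estimate
        outliers[n] = o
        w += mu * (y[n] - x @ w - o) * x         # steepest-descent LS step
        if grad_phi is not None:
            w -= (eps0 / (n + 1)) * grad_phi(w)  # hierarchical perturbation
    return w, outliers

# Toy usage: roughly 2% of the samples carry large sparse outliers.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 8))
w_true = rng.standard_normal(8)
y = X @ w_true + 0.05 * rng.standard_normal(2000)
hit = rng.choice(2000, size=40, replace=False)
y[hit] += rng.choice([-5.0, 5.0], size=40)
w_hat, o_hat = robust_ho_rls_sketch(X, y, grad_phi=lambda w: w)  # phi(w) = ||w||^2 / 2
```

With grad_phi set to the identity, the perturbation steers the estimates toward the minimum-norm LS solution, a simple stand-in for the convex prior loss in the hierarchical formulation.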


Related research

10/02/2011 · Robust artificial neural networks and outlier detection. Technical report
09/19/2018 · Noise Statistics Oblivious GARD For Robust Regression With Sparse Outliers
09/19/2019 · Weighted Linear Bandits for Non-Stationary Environments
01/15/2020 · Bridging Convex and Nonconvex Optimization in Robust PCA: Noise, Outliers, and Missing Data
06/07/2017 · Outlier Detection Using Distributionally Robust Optimization under the Wasserstein Metric
05/16/2015 · A New Perspective on Boosting in Linear Regression via Subgradient Optimization and Relatives
08/28/2018 · Making ordinary least squares linear classifiers more robust