Characterization of the equivalence of robustification and regularization in linear and matrix regression

11/22/2014
by Dimitris Bertsimas et al.

The notion of developing statistical methods in machine learning which are robust to adversarial perturbations in the underlying data has been the subject of increasing interest in recent years. A common feature of this work is that the adversarial robustification often corresponds exactly to regularization methods that appear as a loss function plus a penalty. In this paper we deepen and extend the understanding of the connection between robustification and regularization (as achieved by penalization) in regression problems. Specifically, (a) in the context of linear regression, we characterize precisely under which conditions on the model of uncertainty used and on the loss function penalties, robustification and regularization are equivalent, and (b) we extend the characterization of robustification and regularization to matrix regression problems (matrix completion and Principal Component Analysis).
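For orientation, a standard identity from the robust least-squares literature illustrates the kind of equivalence the abstract describes; it is one well-known special case and is not quoted from the paper itself. If the design matrix X may be corrupted by any perturbation Δ with Frobenius norm at most ρ, then protecting the ℓ2 loss against the worst-case perturbation is exactly ℓ2 (ridge-style) penalization:

    \min_{\beta} \; \max_{\|\Delta\|_F \le \rho} \; \|y - (X + \Delta)\beta\|_2 \;=\; \min_{\beta} \; \|y - X\beta\|_2 + \rho \, \|\beta\|_2

Per the abstract, the paper determines exactly which combinations of uncertainty set and loss-function penalty admit identities of this form, and extends the analysis to matrix regression.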

Related research

- Sparse principal component regression for generalized linear models (09/28/2016): Principal component regression (PCR) is a widely used two-stage procedur...
- Deep Linear Networks for Matrix Completion – An Infinite Depth Limit (10/22/2022): The deep linear network (DLN) is a model for implicit regularization in ...
- Robust and Sparse Regression via γ-divergence (04/22/2016): In high-dimensional data, many sparse regression methods have been propo...
- On regularization methods based on Rényi's pseudodistances for sparse high-dimensional linear regression models (07/31/2020): Several regularization methods have been considered over the last decade...
- Asymptotics of the Sketched Pseudoinverse (11/07/2022): We take a random matrix theory approach to random sketching and show an ...
- Support Vector Regression via a Combined Reward Cum Penalty Loss Function (04/28/2019): In this paper, we introduce a novel combined reward cum penalty loss fun...
- Linear Matrix Inequality Approaches to Koopman Operator Approximation (02/06/2021): The regression problem associated with finding a matrix approximation of...
