Robust Linear Regression: Gradient-descent, Early-stopping, and Beyond

01/31/2023
by Meyer Scetbon et al.

In this work we study the robustness to adversarial attacks of early-stopping strategies for gradient-descent (GD) methods in linear regression. More precisely, we show that early-stopped GD is optimally robust (up to an absolute constant) against Euclidean-norm adversarial attacks. However, this strategy can be arbitrarily sub-optimal against general Mahalanobis attacks. This observation is consistent with recent findings in the classification setting <cit.> showing that GD provably converges to non-robust models. To alleviate this issue, we propose instead to run GD on a transformation of the data adapted to the attack. This data transformation amounts to applying feature-dependent learning rates, and we show that the resulting modified GD can handle any Mahalanobis attack, as well as more general attacks under certain conditions. Unfortunately, choosing such an adapted transformation can be hard for general attacks. To address this, we design a simple and tractable estimator whose adversarial risk is optimal up to a multiplicative constant of 1.1124 in the population regime, and which works for any norm.
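For intuition, the transformed-GD idea can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm or code: it assumes the Mahalanobis attack is described by a known positive-definite matrix Sigma, whitens the features by Sigma^(-1/2) (which is equivalent to running GD with feature-dependent learning rates), applies early-stopped GD to the transformed data, and maps the iterate back to the original coordinates. All function names, hyperparameters, and the toy data are placeholders.

import numpy as np

def early_stopped_gd(X, y, lr=0.01, n_steps=200):
    """Plain gradient descent on the least-squares objective, stopped after a
    fixed number of steps (the stopping time plays the role of the regularizer)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_steps):
        grad = X.T @ (X @ w - y) / n
        w -= lr * grad
    return w

def inv_sqrt(Sigma):
    """Symmetric inverse square root of a positive-definite matrix."""
    vals, vecs = np.linalg.eigh(Sigma)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def mahalanobis_adapted_gd(X, y, Sigma, lr=0.01, n_steps=200):
    """GD on features whitened with respect to the attack geometry Sigma.
    Running GD on X @ Sigma^{-1/2} amounts to feature-dependent learning
    rates in the original coordinates."""
    A = inv_sqrt(Sigma)                       # Sigma^{-1/2}, assumed known to the learner
    w_tilde = early_stopped_gd(X @ A, y, lr=lr, n_steps=n_steps)
    return A @ w_tilde                        # map the iterate back to original coordinates

# Toy usage: an attack budget that is much larger along the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
w_star = rng.normal(size=5)
y = X @ w_star + 0.1 * rng.normal(size=500)
Sigma = np.diag([100.0, 1.0, 1.0, 1.0, 1.0])
w_hat = mahalanobis_adapted_gd(X, y, Sigma)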
