A Theoretical Study of The Effects of Adversarial Attacks on Sparse Regression

12/21/2022
by Deepak Maurya et al.

This paper analyzes ℓ_1-regularized linear regression under the challenging scenario in which only adversarially corrupted data are available for training. We use the primal-dual witness paradigm to provide provable guarantees that the support of the estimated regression parameter vector matches that of the true parameter. Our theoretical analysis shows the counter-intuitive result that an adversary can influence the sample complexity by corrupting the irrelevant features, i.e., those corresponding to zero coefficients of the regression parameter vector, which consequently do not affect the dependent variable. Since any adversarially robust algorithm has its limitations, our analysis identifies the regimes in which the learning algorithm and the adversary each dominate the other. Characterizing these fundamental limits addresses the question of which quantities, such as the mutual incoherence, the maximum and minimum eigenvalues of the covariance matrix, and the adversarial perturbation budget, determine whether LASSO succeeds with high or low probability. Moreover, the derived sample complexity is logarithmic in the dimension of the regression parameter vector, and our theoretical claims are validated by empirical analysis on synthetic and real-world datasets.
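To make the setting concrete, the sketch below fits the standard LASSO objective, β̂ ∈ argmin_β (1/(2n))‖y − Xβ‖²₂ + λ‖β‖₁, on a design matrix whose irrelevant columns are perturbed after the responses have been generated, so the corruption cannot affect y. This is a minimal illustration under assumed notation, not the authors' code: the constants n, p, k, the budget eps, the noise-aligned perturbation, and the λ ∝ √(log p / n) scaling are all illustrative choices rather than values from the paper. It assumes numpy and scikit-learn.

```python
# Minimal, hypothetical sketch of the setting above: LASSO support recovery
# when only the irrelevant features (columns of X with zero true coefficients)
# are corrupted. y is generated before the corruption, so y is unaffected.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k = 200, 50, 5                 # samples, features, true support size
beta = np.zeros(p)
beta[:k] = 1.0                       # true support: the first k coordinates

X = rng.standard_normal((n, p))
noise = 0.1 * rng.standard_normal(n)
y = X @ beta + noise                 # y depends only on the first k columns

# Adversary: perturb only the irrelevant columns. Aligning the perturbation
# with the noise is a crude stand-in for the worst-case attack the paper
# analyzes under a perturbation budget eps (illustrative value).
eps = 0.5
X_adv = X.copy()
X_adv[:, k:] += eps * (noise / np.linalg.norm(noise))[:, None]

# The usual lambda ~ sqrt(log p / n) scaling for support recovery.
lam = np.sqrt(np.log(p) / n)
fit = Lasso(alpha=lam, max_iter=10_000).fit(X_adv, y)

recovered = set(np.flatnonzero(np.abs(fit.coef_) > 1e-6))
print("recovered support:", sorted(recovered))
print("exact recovery:", recovered == set(range(k)))
```

Sweeping eps upward in such a sketch probes when exact support recovery breaks down, mirroring the regimes, governed by the mutual incoherence, the covariance eigenvalues, and the perturbation budget, that the paper characterizes theoretically.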
