
Overparameterized Linear Regression under Adversarial Attacks

by   Antônio H. Ribeiro, et al.

As machine learning models are deployed in critical applications, their vulnerabilities and brittleness become a pressing concern. Adversarial attacks are a popular framework for studying these vulnerabilities. In this work, we study the error of linear regression in the face of adversarial attacks. We provide bounds on the adversarial error in terms of the traditional (non-adversarial) risk and the parameter norm, and show how these bounds make it possible to carry analyses from non-adversarial setups over to the adversarial risk. We illustrate the usefulness of these results by shedding light on whether or not overparameterized linear models can be adversarially robust. We show that adding features to a linear model can be a source of either additional robustness or additional brittleness, and that this difference arises from scaling and from how the ℓ_1 and ℓ_2 norms of random projections concentrate. We also show that the proposed reformulation allows adversarial training to be solved as a convex optimization problem, which we then use as a tool to study how adversarial training and other regularization methods affect the robustness of the estimated models.
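The link between adversarial error and the parameter norm can be made concrete for a single linear prediction: for an ℓ_∞-bounded attack of radius δ on a model f(x) = wᵀx, the worst-case absolute error equals the clean error plus δ times the dual (ℓ_1) norm of w. A minimal numerical sketch of this standard identity (the variable names and the toy data are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, delta = 5, 0.1
w = rng.normal(size=d)            # fixed linear model f(x) = w @ x
x, y = rng.normal(size=d), 0.5    # one input/label pair

# Closed form for the worst-case l_inf attack on a linear model:
#   max_{||dx||_inf <= delta} |w @ (x + dx) - y| = |w @ x - y| + delta * ||w||_1
# (the dual norm of l_inf is l_1).
closed_form = abs(w @ x - y) + delta * np.linalg.norm(w, 1)

# The maximizer sits at a corner of the l_inf ball, with each coordinate
# aligned with sign(w) and the sign of the clean residual:
dx = delta * np.sign(w) * np.sign(w @ x - y)
explicit = abs(w @ (x + dx) - y)

assert np.isclose(closed_form, explicit)
print(closed_form)
```

Because `|w @ x - y| + delta * ||w||_1` is convex in w (and stays convex after squaring and averaging over a dataset), replacing the inner maximization with this expression turns adversarial training of linear regression into a convex problem, in line with the reformulation the abstract describes.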


Related papers:
- Surprises in adversarially-trained linear regression
- Robust Linear Regression: Gradient-descent, Early-stopping, and Beyond
- Generalization Bounds for Adversarial Contrastive Learning
- Rademacher Complexity for Adversarially Robust Generalization
- Multi-Agent Adversarial Training Using Diffusion Learning
- Decentralized Adversarial Training over Graphs
- A Modified Drake Equation for Assessing Adversarial Risk to Machine Learning Models