Adversarial Robustness is at Odds with Lazy Training

06/18/2022
by Yunjuan Wang, et al.

Recent works show that random neural networks are vulnerable to adversarial attacks [Daniely and Schacham, 2020] and that such attacks can be easily found using a single step of gradient descent [Bubeck et al., 2021]. In this work, we take it one step further and show that a single gradient step can find adversarial examples for networks trained in the so-called lazy regime. This regime is interesting because, even though the neural network weights remain close to their initialization, there exist networks with small generalization error that can be found efficiently using first-order methods. Our work challenges the lazy training regime, the dominant model in which neural networks are provably efficiently learnable. We show that networks trained in this regime, even though they enjoy good theoretical computational guarantees, remain vulnerable to adversarial examples. To the best of our knowledge, this is the first work to prove that such well-generalizing neural networks are still vulnerable to adversarial attacks.

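To make the "single gradient step" idea concrete, here is a minimal NumPy sketch of an FGSM-style one-step input-gradient attack on a small randomly initialized two-layer ReLU network. The architecture, weight scales, and step size `eps` are illustrative assumptions for this sketch, not the paper's actual construction or proof setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a two-layer ReLU network with random weights.
d, m = 64, 256                                    # input dim, hidden width
W = rng.normal(0.0, 1.0 / np.sqrt(d), (m, d))     # first-layer weights
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)  # fixed second-layer signs

def f(x):
    """Scalar output of the two-layer ReLU network."""
    return float(a @ np.maximum(W @ x, 0.0))

def input_gradient(x):
    """Gradient of f with respect to the input x (ReLU subgradient)."""
    mask = (W @ x > 0).astype(float)
    return (a * mask) @ W

x = rng.normal(0.0, 1.0, d)   # a generic input point
s = np.sign(f(x))             # current output sign, playing the role of a label

# A single signed-gradient step on the input, pushing f toward the opposite sign.
eps = 0.05
x_adv = x - s * eps * np.sign(input_gradient(x))

print(f"f(x)     = {f(x):+.4f}")
print(f"f(x_adv) = {f(x_adv):+.4f}")
```

Because the network is piecewise linear, a small step against the input gradient provably decreases the sign-weighted output locally; the abstract's claim is that, even after lazy training, one such step suffices to produce adversarial examples.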