Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels

02/03/2023
by Simone Bombari, et al.

Machine learning models are vulnerable to adversarial perturbations, and a thought-provoking paper by Bubeck and Sellke has analyzed this phenomenon through the lens of over-parameterization: smoothly interpolating the data requires significantly more parameters than simply memorizing it. However, this "universal" law provides only a necessary condition for robustness, and it is unable to discriminate between models. In this paper, we address these gaps by focusing on empirical risk minimization in two prototypical settings, namely, random features and the neural tangent kernel (NTK). We prove that, for random features, the model is not robust for any degree of over-parameterization, even when the necessary condition coming from the universal law of robustness is satisfied. In contrast, for even activations, the NTK model meets the universal lower bound, and it is robust as soon as the necessary condition on over-parameterization is fulfilled. This also addresses a conjecture in prior work by Bubeck, Li and Nagaraj. Our analysis decouples the effect of the kernel of the model from an "interaction matrix", which describes the interaction with the test data and captures the effect of the activation. Our theoretical results are corroborated by numerical evidence on both synthetic and standard datasets (MNIST, CIFAR-10).
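To make the setting concrete, the Python sketch below illustrates (it is not the authors' code) a random-features model of the kind studied here: a feature map with frozen first-layer weights, second-layer weights fit by minimum-norm interpolation, and a gradient-based probe of the model's sensitivity to small input perturbations. The dimensions, the ReLU activation, and the perturbation size are illustrative assumptions.

# Minimal sketch (illustrative, not the paper's code): fit a random-features
# model on synthetic data and probe its sensitivity to small input perturbations.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 50, 2000          # samples, input dimension, number of random features

# Synthetic data: Gaussian inputs, labels from a random linear teacher.
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = np.sign(X @ rng.standard_normal(d))

# Random-features map: phi(x) = relu(V x), with V fixed at initialization.
V = rng.standard_normal((k, d)) / np.sqrt(d)
relu = lambda z: np.maximum(z, 0.0)
Phi = relu(X @ V.T)              # (n, k) feature matrix

# Interpolate the training data: minimum-norm least squares over the
# second-layer weights (possible since k >> n).
theta = np.linalg.pinv(Phi) @ y

def f(x):
    # Model output at a single input x.
    return relu(x @ V.T) @ theta

def input_grad(x):
    # Gradient of f with respect to the input x (ReLU derivative is 0/1).
    act = (x @ V.T > 0).astype(float)
    return V.T @ (act * theta)

# Robustness probe: move a test point a small step along the input gradient
# and compare the model outputs before and after the perturbation.
x0 = rng.standard_normal(d) / np.sqrt(d)
g = input_grad(x0)
eps = 0.1 * np.linalg.norm(x0) * g / (np.linalg.norm(g) + 1e-12)
print("f(x0)          =", f(x0))
print("f(x0 + eps)    =", f(x0 + eps))
print("||grad f(x0)|| =", np.linalg.norm(g))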

Related research

05/26/2021  A Universal Law of Robustness via Isoperimetry
Classically, data interpolation with a parametrized model class is possi...

06/22/2022  Robust Universal Adversarial Perturbations
Universal Adversarial Perturbations (UAPs) are imperceptible, image-agno...

11/15/2018  A Spectral View of Adversarially Robust Features
Given the apparent difficulty of learning models that are robust to adve...

04/21/2021  Jacobian Regularization for Mitigating Universal Adversarial Perturbations
Universal Adversarial Perturbations (UAPs) are input perturbations that ...

06/13/2020  A New Algorithm for Tessellated Kernel Learning
The accuracy and complexity of machine learning algorithms based on kern...

02/18/2021  On Connectivity of Solutions in Deep Learning: The Role of Over-parameterization and Feature Quality
It has been empirically observed that, in deep neural networks, the solu...
