Linking average- and worst-case perturbation robustness via class selectivity and dimensionality

10/14/2020
by Matthew L. Leavitt et al.

Representational sparsity is known to affect robustness to input perturbations in deep neural networks (DNNs), but less is known about how the semantic content of representations affects robustness. Class selectivity (the variability of a unit's responses across data classes or dimensions) is one way of quantifying the sparsity of semantic representations. Given recent evidence that class selectivity may not be necessary for, and can even impair, generalization, we investigated whether it also confers robustness (or vulnerability) to perturbations of input data. We found that class selectivity leads to increased vulnerability to average-case (naturalistic) perturbations in ResNet18 and ResNet20, as measured using Tiny ImageNetC and CIFAR10C, respectively: networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are more vulnerable. In contrast, we found that class selectivity increases robustness to worst-case (i.e., white-box adversarial) perturbations, suggesting that while decreasing class selectivity helps average-case robustness, it harms worst-case robustness. To explain this difference, we studied the dimensionality of the networks' representations: the dimensionality of early-layer representations is inversely proportional to a network's class selectivity, and adversarial samples cause a larger increase in early-layer dimensionality than corrupted samples do. We also found that the input-unit gradient is more variable across samples and units in high-selectivity networks than in low-selectivity networks. These results suggest that units participate more consistently in low-selectivity regimes than in high-selectivity regimes, effectively creating a larger attack surface and hence greater vulnerability to worst-case perturbations.
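The two quantities at the heart of the abstract are easy to state concretely. Below is a minimal NumPy sketch, not the paper's exact implementation: the selectivity index follows the definition from the authors' earlier work (the unit's largest class-conditional mean activation versus the mean of the remaining classes' means, with a small epsilon for numerical stability), while dimensionality is estimated here as the participation ratio of the covariance eigenvalues, a standard proxy assumed for illustration. Function names, epsilon values, and the synthetic data are ours.

```python
import numpy as np

def class_selectivity_index(unit_acts, labels, eps=1e-7):
    """Selectivity of one unit: (mu_max - mu_rest) / (mu_max + mu_rest + eps).

    unit_acts: (n_samples,) post-nonlinearity activations of a single unit
    labels:    (n_samples,) integer class labels
    """
    classes = np.unique(labels)
    class_means = np.array([unit_acts[labels == c].mean() for c in classes])
    mu_max = class_means.max()
    # mean activation over every class except the most-activating one
    mu_rest = np.delete(class_means, class_means.argmax()).mean()
    return (mu_max - mu_rest) / (mu_max + mu_rest + eps)

def participation_ratio(features):
    """Dimensionality of a representation: (sum(lam))^2 / sum(lam^2),
    where lam are the eigenvalues of the activation covariance.

    features: (n_samples, n_units) layer activations
    """
    centered = features - features.mean(axis=0)
    cov = centered.T @ centered / (len(features) - 1)
    lam = np.clip(np.linalg.eigvalsh(cov), 0.0, None)  # floor tiny negatives
    return lam.sum() ** 2 / (np.square(lam).sum() + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acts = rng.random(1000)             # fake activations of one unit
    labels = rng.integers(0, 10, 1000)  # fake 10-class labels
    print(class_selectivity_index(acts, labels))  # near 0: unselective
    print(participation_ratio(rng.normal(size=(1000, 64))))  # high, near 64
```

The index runs from 0 (identical mean response to every class) to 1 (the unit responds to only one class), which is what makes it a convenient knob for the regularization experiments described above.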

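The abstract also refers to networks "regularized to have lower levels of class selectivity." In the authors' prior work, this is done by adding a scaled selectivity term to the task loss, so that a negative coefficient discourages selectivity and a positive one encourages it. The following is a hedged PyTorch sketch of that idea; the function name and single-layer scope are chosen for illustration and may differ from the paper's implementation.

```python
import torch

def mean_selectivity(acts, labels, num_classes, eps=1e-7):
    """Differentiable batch estimate of the mean class selectivity index.

    acts:   (batch, units) post-nonlinearity activations of one layer
    labels: (batch,) integer class labels (batch should cover >= 2 classes)
    """
    # class-conditional mean activations, skipping classes absent from the batch
    class_means = torch.stack([acts[labels == c].mean(dim=0)
                               for c in range(num_classes)
                               if (labels == c).any()])  # (present, units)
    mu_max, _ = class_means.max(dim=0)                   # (units,)
    mu_rest = (class_means.sum(dim=0) - mu_max) / (class_means.shape[0] - 1)
    si = (mu_max - mu_rest) / (mu_max + mu_rest + eps)   # per-unit index
    return si.mean()

# Inside a training step (alpha < 0 penalizes selectivity, alpha > 0 promotes it):
#   loss = task_loss - alpha * mean_selectivity(layer_acts, labels, num_classes)
```

Sweeping this coefficient in both directions is what produces the low- and high-selectivity networks compared in the robustness experiments.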