
Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing

by Yatong Bai, et al.

While the literature shows that simultaneously accurate and robust classifiers exist for common datasets, previous methods that improve the adversarial robustness of classifiers often manifest an accuracy-robustness trade-off. We build upon recent advancements in data-driven "locally biased smoothing" to develop classifiers that treat benign and adversarial test data differently. Specifically, we tailor the smoothing operation to the use of a robust neural network as the source of robustness. We then extend the smoothing procedure to the multi-class setting and adapt an adversarial input detector into a policy network. The policy adaptively adjusts the mixture of the robust base classifier and a standard network, where the standard network is optimized for clean accuracy and is not robust in general. We provide theoretical analyses to motivate the use of the adaptive smoothing procedure, certify the robustness of the smoothed classifier under realistic assumptions, and justify the introduction of the policy network. We use various attack methods, including AutoAttack and adaptive attacks, to empirically verify that the smoothed model noticeably improves the accuracy-robustness trade-off. On the CIFAR-100 dataset, our method simultaneously achieves 80.09% clean accuracy and 32.94% AutoAttacked accuracy. The code that implements adaptive smoothing is available at
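The core idea of the abstract — a policy that adaptively mixes a standard (accurate) classifier with a robust one — can be sketched as follows. This is an illustrative toy in NumPy, not the paper's exact formulation: the mixing weight `alpha` stands in for the policy network's output (near 0 on benign inputs, near 1 on suspected adversarial inputs), and the function names are invented for this sketch.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def adaptive_smooth(logits_std, logits_rob, alpha):
    """Convex mixture of a standard and a robust classifier's outputs.

    alpha plays the role of the policy network's output:
    alpha ~ 0 trusts the standard (clean-accuracy) model,
    alpha ~ 1 trusts the robust model.
    """
    p_std = softmax(logits_std)
    p_rob = softmax(logits_rob)
    return (1.0 - alpha) * p_std + alpha * p_rob

# toy 3-class example: the standard model is confident,
# the robust model is flatter
std = np.array([2.0, 0.1, -1.0])
rob = np.array([0.5, 0.4, 0.1])

clean = adaptive_smooth(std, rob, alpha=0.1)  # lean toward the standard model
adv = adaptive_smooth(std, rob, alpha=0.9)    # lean toward the robust model
```

Because the mixture is convex, the output is always a valid probability distribution, and the policy trades off clean accuracy against robustness per input rather than globally.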



