Classifier Robustness Enhancement Via Test-Time Transformation

03/27/2023
by Tsachi Blau, et al.

It has recently been discovered that adversarially trained classifiers exhibit an intriguing property, referred to as perceptually aligned gradients (PAG). PAG implies that the gradients of such classifiers possess a meaningful structure, aligned with human perception. Adversarial training is currently the best-known way to achieve classification robustness under adversarial attacks, yet the PAG property has not been leveraged to further improve classifier robustness. In this work, we introduce Classifier Robustness Enhancement Via Test-Time Transformation (TETRA), a novel defense method that utilizes PAG to enhance the performance of trained robust classifiers. Our method operates in two phases. First, it modifies the input image via a designated targeted adversarial attack toward each of the dataset's classes. Then, it classifies the input image based on the distance to each of the modified instances, under the assumption that the shortest distance corresponds to the true class. We show that the proposed method achieves state-of-the-art results and validate our claims through extensive experiments on a variety of defense methods, classifier architectures, and datasets. We also empirically demonstrate that TETRA can boost the accuracy of any differentiable adversarially trained classifier across a variety of attacks, including ones unseen at training time. Specifically, applying TETRA leads to substantial improvements of up to +23%, +20%, and +26% on CIFAR10, CIFAR100, and ImageNet, respectively.
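To make the two-phase procedure concrete, here is a minimal PyTorch sketch of the idea as described in the abstract. It is an illustration under stated assumptions, not the paper's implementation: the targeted attack is modeled as a plain L2-normalized gradient descent on the targeted cross-entropy, the classification distance is a plain L2 norm, and the step size `alpha` and step count `steps` are placeholder values; the paper's actual attack design and distance measure may differ.

```python
import torch
import torch.nn.functional as F

def targeted_attack(model, x, target, alpha=0.5, steps=20):
    """Phase 1 (sketch): move x toward `target` by descending the
    targeted cross-entropy loss. alpha/steps are illustrative values."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Normalize per-sample so every step has the same L2 size
        # (assumes 4D image batches of shape [B, C, H, W]).
        gnorm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        x_adv = (x_adv.detach() - alpha * grad / gnorm).clamp(0.0, 1.0)
    return x_adv.detach()

def tetra_predict(model, x, num_classes):
    """Phase 2 (sketch): attack x toward every class, then predict the
    class whose modified instance stayed closest to the original image."""
    b = x.size(0)
    dists = torch.empty(b, num_classes, device=x.device)
    for c in range(num_classes):
        target = torch.full((b,), c, dtype=torch.long, device=x.device)
        x_c = targeted_attack(model, x, target)
        # L2 distance between the input and its class-c modification.
        dists[:, c] = (x_c - x).flatten(1).norm(p=2, dim=1)
    return dists.argmin(dim=1)
```

For a robust classifier `model` (in eval mode) and a batch of CIFAR10 images `x` in [0, 1], `tetra_predict(model, x, num_classes=10)` would return one prediction per image; PAG is what makes this plausible, since gradients of a robust classifier move the image toward perceptually meaningful class features, so reaching the true class should require the smallest change.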


