Adversarial Defenses via Vector Quantization

05/23/2023
by Zhiyi Dong, et al.

Building upon Randomized Discretization, we develop two novel adversarial defenses against white-box PGD attacks, termed pRD and swRD, which apply vector quantization in higher-dimensional spaces. These methods not only offer a theoretical guarantee in the form of certified accuracy, but are also shown through extensive experiments to perform comparably to, or even better than, the current state of the art in adversarial defenses. Both methods can further be extended to a variant that allows additional training of the target classifier, which yields further improvements in performance.
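The core idea behind such defenses is to quantize the input before classification, snapping image patches onto a small codebook so that small adversarial perturbations are absorbed by the discretization. The sketch below is a minimal, hypothetical illustration of patch-wise vector quantization (it is not the authors' exact pRD/swRD procedure; the codebook here is random, whereas in practice it would be learned, e.g. by k-means):

```python
import numpy as np

def quantize_patches(image, codebook, patch=2):
    """Replace each non-overlapping patch x patch block of a grayscale image
    with its nearest codeword (Euclidean distance) from the codebook."""
    h, w = image.shape
    out = np.empty_like(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = image[i:i + patch, j:j + patch].reshape(-1)
            # nearest codeword under squared Euclidean distance
            idx = np.argmin(((codebook - block) ** 2).sum(axis=1))
            out[i:i + patch, j:j + patch] = codebook[idx].reshape(patch, patch)
    return out

# toy example: a 4x4 grayscale image and a random codebook of 8 codewords,
# each codeword being a flattened 2x2 patch
rng = np.random.default_rng(0)
img = rng.random((4, 4))
codebook = rng.random((8, 4))
quantized = quantize_patches(img, codebook)
```

After quantization, every patch of `quantized` is exactly one of the codewords, so an attacker must shift a patch across a quantization boundary to change the classifier's input at all.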


Related research:
04/10/2018  On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses
03/25/2019  Defending against Whitebox Adversarial Attacks via Randomized Discretization
08/30/2020  Benchmarking adversarial attacks and defenses for time-series data
01/02/2020  Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural Networks Against Adversarial Attacks
08/20/2019  Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses
02/17/2023  Measuring Equality in Machine Learning Security Defenses
