Are Generative Classifiers More Robust to Adversarial Attacks?

02/19/2018
by Yingzhen Li, et al.

There is rising interest in studying the robustness of deep neural network classifiers against adversaries, with both advanced attack and defence techniques being actively developed. However, most recent work focuses on discriminative classifiers, which model only the conditional distribution of the labels given the inputs. In this abstract we propose the deep Bayes classifier, which improves the classical naive Bayes with conditional deep generative models, and verify its robustness against a number of existing attacks. We further develop a detection method for adversarial examples based on conditional deep generative models. Our initial results on MNIST suggest that deep Bayes classifiers might be more robust than deep discriminative classifiers, and that the proposed detection method achieves high detection rates against two commonly used attacks.
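To make the idea concrete, here is a minimal sketch (not the authors' implementation) of how a deep Bayes classifier and a likelihood-based detector could be wired together. It assumes per-class log-likelihood estimates log p(x|y) are available, e.g. from the ELBO of a conditional VAE; the helper names, the stand-in likelihood values, and the detection threshold below are all hypothetical.

```python
import numpy as np

def log_sum_exp(a):
    """Numerically stable log-sum-exp over a 1-D array."""
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def deep_bayes_predict(log_px_given_y, log_prior):
    """Bayes-rule classification from class-conditional log-likelihoods.

    log_px_given_y: shape (num_classes,), estimates of log p(x|y)
                    (e.g., per-class ELBOs from a conditional VAE).
    log_prior:      shape (num_classes,), log p(y).
    Returns the predicted label and the log-posterior log p(y|x).
    """
    logits = log_px_given_y + log_prior            # log p(x|y) + log p(y)
    log_posterior = logits - log_sum_exp(logits)   # normalise over classes
    return int(np.argmax(log_posterior)), log_posterior

def detect_adversarial(log_px_given_y, threshold):
    """Flag an input as adversarial when the generative model assigns it
    low density under every class, i.e. max_y log p(x|y) falls below a
    threshold calibrated on clean data (one plausible detection rule)."""
    return float(np.max(log_px_given_y)) < threshold

# Toy usage on a hypothetical 10-class problem (e.g., MNIST digits).
log_px_given_y = np.random.randn(10) - 500.0       # stand-in likelihood estimates
log_prior = np.full(10, -np.log(10.0))             # uniform class prior
label, log_post = deep_bayes_predict(log_px_given_y, log_prior)
is_adv = detect_adversarial(log_px_given_y, threshold=-520.0)
```

The key design point is that the classifier's decision is mediated entirely by the generative model: an attack must raise log p(x|y') for a wrong class y' rather than merely crossing a discriminative decision boundary, and the same likelihood estimates double as a detection signal for inputs that are unlikely under every class.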
