Multi-head Uncertainty Inference for Adversarial Attack Detection

12/20/2022
by   Yuqi Yang, et al.

Deep neural networks (DNNs) are susceptible to tiny adversarial perturbations that cause erroneous predictions. Various methods, including adversarial defenses and uncertainty inference (UI), have been developed in recent years to counter adversarial attacks. In this paper, we propose a multi-head uncertainty inference (MH-UI) framework for detecting adversarial attack examples. We adopt a multi-head architecture with multiple prediction heads (i.e., classifiers) to obtain predictions from different depths of the DNN, introducing shallow information into the UI. Treating the heads at different depths as independent, we assume their normalized predictions follow the same Dirichlet distribution and estimate its parameters by moment matching. Cognitive uncertainty introduced by adversarial attacks is reflected and amplified in this distribution. Experimental results show that the proposed MH-UI framework outperforms all the reference UI methods on the adversarial attack detection task under different settings.


research
02/17/2021

Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids

False data injection attack (FDIA) is a critical security issue in power...
research
07/20/2020

Evaluating a Simple Retraining Strategy as a Defense Against Adversarial Attacks

Though deep neural networks (DNNs) have shown superiority over other tec...
research
03/02/2018

Protecting JPEG Images Against Adversarial Attacks

As deep neural networks (DNNs) have been integrated into critical system...
research
02/25/2023

Scalable Attribution of Adversarial Attacks via Multi-Task Learning

Deep neural networks (DNNs) can be easily fooled by adversarial attacks ...
research
09/23/2020

Detection of Iterative Adversarial Attacks via Counter Attack

Deep neural networks (DNNs) have proven to be powerful tools for process...
research
05/29/2022

Superclass Adversarial Attack

Adversarial attacks have only focused on changing the predictions of the...
research
10/26/2021

Disrupting Deep Uncertainty Estimation Without Harming Accuracy

Deep neural networks (DNNs) have proven to be powerful predictors and ar...
