
Attack-Agnostic Adversarial Detection

06/01/2022
by Jiaxin Cheng et al.
USC Information Sciences Institute

The growing number of adversarial attacks in recent years gives attackers an advantage over defenders: defenders must train detectors only after the attack types are known, and many models must be maintained to keep detection performance high against any upcoming attack. We propose a way to end this tug-of-war between attackers and defenders by treating adversarial attack detection as an anomaly detection problem, so that the detector is agnostic to the attack. We quantify the statistical deviation caused by adversarial perturbations in two ways: the Least Significant Component Feature (LSCF) measures how far adversarial examples deviate from the statistics of benign samples, and the Hessian Feature (HF) reflects how adversarial examples distort the landscape of the model's optima by measuring the local loss curvature. Empirical results show that our method achieves overall ROC AUCs of 94.9 and 89.7 and performs comparably to adversarial detectors trained with adversarial examples on most of the attacks.
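To make the idea concrete, below is a minimal sketch of the two feature types named in the abstract and a benign-only anomaly detector built on top of them. The names LSCF and HF come from the paper; the PCA tail projection, the finite-difference curvature proxy, and the IsolationForest detector are illustrative assumptions, not the authors' implementation.

# Hedged sketch: LSCF-style and HF-style features plus a benign-only detector.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

def fit_lscf_basis(benign_x: np.ndarray, n_keep: int = 64):
    # Fit PCA on flattened benign samples and keep the LEAST significant
    # directions; benign data carries little energy there, perturbations more.
    pca = PCA().fit(benign_x.reshape(len(benign_x), -1))
    return pca.mean_, pca.components_[-n_keep:]

def lscf(x: np.ndarray, mean: np.ndarray, tail: np.ndarray) -> np.ndarray:
    # Project samples onto the least significant components of the benign data.
    return (x.reshape(len(x), -1) - mean) @ tail.T

def hessian_feature(model, x: torch.Tensor, y: torch.Tensor, eps: float = 1e-2) -> torch.Tensor:
    # Curvature proxy: ||grad(x + eps*d) - grad(x)|| along a random direction d,
    # which grows when x sits in a sharply curved region of the loss surface.
    def input_grad(inp):
        inp = inp.clone().requires_grad_(True)
        loss = F.cross_entropy(model(inp), y)
        return torch.autograd.grad(loss, inp)[0]
    d = torch.randn_like(x)
    d = d / d.flatten(1).norm(dim=1).view(-1, 1, 1, 1)   # assumes NCHW inputs
    return (input_grad(x + eps * d) - input_grad(x)).flatten(1).norm(dim=1)

def fit_benign_detector(benign_features: np.ndarray) -> IsolationForest:
    # Train a one-class anomaly detector on benign features only, so it never
    # sees (and is therefore agnostic to) any specific attack.
    return IsolationForest(random_state=0).fit(benign_features)

At test time, under these assumptions, a sample's LSCF and HF values would be concatenated and scored by the benign-only detector; anything flagged as an outlier is treated as adversarial, which is what makes the detector attack-agnostic.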

06/14/2020

Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems

Adversarial attacks pose significant challenges for detecting adversaria...
12/11/2020

Random Projections for Adversarial Attack Detection

Whilst adversarial attack detection has received considerable attention,...
02/09/2023

Exploiting Certified Defences to Attack Randomised Smoothing

In guaranteeing that no adversarial examples exist within a bounded regi...
06/30/2022

MEAD: A Multi-Armed Approach for Evaluation of Adversarial Examples Detectors

Detection of adversarial examples has been a hot topic in the last years...
07/01/2021

Using Anomaly Feature Vectors for Detecting, Classifying and Warning of Outlier Adversarial Examples

We present DeClaW, a system for detecting, classifying, and warning of a...
03/23/2018

Detecting Adversarial Perturbations with Saliency

In this paper we propose a novel method for detecting adversarial exampl...
10/22/2022

ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation

Adversarial Examples Detection (AED) is a crucial defense technique agai...