Detection of Adversarial Attacks and Characterization of Adversarial Subspace

10/26/2019
by Mohammad Esmaeilpour, et al.

Adversarial attacks have always been a serious threat to any data-driven model. In this paper, we explore subspaces of adversarial examples in the unitary vector domain and propose a novel detector for defending models trained for environmental sound classification. We measure the chordal distance between legitimate and malicious representations of sounds in the unitary space of the generalized Schur decomposition and show that their manifolds lie far from each other. Our front-end detector is a regularized logistic regression that discriminates between the eigenvalues of legitimate and adversarial spectrograms. Experimental results on three benchmark datasets of environmental sounds represented as spectrograms show that the proposed detector achieves a high detection rate against eight types of adversarial attack and outperforms other detection approaches.
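
The pipeline sketched in the abstract (generalized Schur decomposition, chordal distance between generalized eigenvalues, regularized logistic regression on eigenvalue features) can be illustrated with a small numerical example. The snippet below is not the authors' implementation: the fixed reference matrix R used to form matrix pencils, the index-wise eigenvalue pairing, the sorted-magnitude features, and the random toy data are all illustrative assumptions. It only shows how the building blocks fit together with standard SciPy/scikit-learn routines.

```python
# Minimal sketch (not the authors' code): compare a legitimate and an adversarial
# spectrogram via the generalized Schur (QZ) decomposition, measure the chordal
# distance between their generalized eigenvalues, and train an L2-regularized
# logistic regression on eigenvalue features. Data and names are illustrative.
import numpy as np
from scipy.linalg import qz
from sklearn.linear_model import LogisticRegression

def generalized_eigenvalues(A, B):
    """Generalized eigenvalue pairs (alpha, beta) of the pencil (A, B) via QZ."""
    AA, BB, Q, Z = qz(A, B, output="complex")
    return np.diag(AA), np.diag(BB)

def chordal_distance(alpha1, beta1, alpha2, beta2):
    """Chordal distance between two sets of generalized eigenvalues,
    paired index-wise here as a crude alignment for the sketch."""
    num = np.abs(alpha1 * beta2 - alpha2 * beta1)
    den = (np.sqrt(np.abs(alpha1) ** 2 + np.abs(beta1) ** 2)
           * np.sqrt(np.abs(alpha2) ** 2 + np.abs(beta2) ** 2))
    return num / den

# Toy example: pencils formed from a spectrogram-like matrix S against a reference R.
rng = np.random.default_rng(0)
n = 64
R = rng.standard_normal((n, n))                        # reference matrix (assumption)
S_clean = rng.standard_normal((n, n))                  # stand-in for a legitimate spectrogram
S_adv = S_clean + 0.05 * rng.standard_normal((n, n))   # stand-in for a perturbed spectrogram

a1, b1 = generalized_eigenvalues(S_clean, R)
a2, b2 = generalized_eigenvalues(S_adv, R)
print("mean chordal distance:", chordal_distance(a1, b1, a2, b2).mean())

# Front-end detector: regularized logistic regression on eigenvalue magnitudes.
def eig_features(S, R):
    a, b = generalized_eigenvalues(S, R)
    lam = a / (b + 1e-12)                              # generalized eigenvalues alpha/beta
    return np.sort(np.abs(lam))                        # sorted magnitudes as a feature vector

X = np.stack([eig_features(rng.standard_normal((n, n)), R) for _ in range(200)])
y = rng.integers(0, 2, size=200)                       # dummy legitimate/adversarial labels
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)
print("training accuracy on toy data:", clf.score(X, y))
```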


Related research

12/18/2019 - Detecting Adversarial Attacks On Audio-Visual Speech Recognition
07/28/2021 - Detecting AutoAttack Perturbations in the Frequency Domain
11/24/2019 - Robustness Metrics for Real-World Adversarial Examples
07/27/2020 - From Sound Representation to Model Robustness
01/20/2021 - Adversarial Attacks for Tabular Data: Application to Fraud Detection and Imbalanced Data
05/02/2021 - Intriguing Usage of Applicability Domain: Lessons from Cheminformatics Applied to Adversarial Learning
02/09/2022 - Adversarial Detection without Model Information
