A Robust Approach for Securing Audio Classification Against Adversarial Attacks

04/24/2019
by Mohammad Esmaeilpour, et al.

Adversarial audio attacks can be considered small perturbations, imperceptible to human ears, that are intentionally added to audio signals and cause a machine learning model to make mistakes. This poses a security concern about the safety of machine learning models, since adversarial attacks can fool such models into wrong predictions. In this paper, we first review some strong adversarial attacks that may affect both audio signals and their 2D representations, and evaluate the resiliency of the most common machine learning models, namely deep learning models and support vector machines (SVMs), trained on 2D audio representations such as the short-time Fourier transform (STFT), discrete wavelet transform (DWT), and cross recurrence plot (CRP), against several state-of-the-art adversarial attacks. Next, we propose a novel approach based on a pre-processed DWT representation of audio signals and an SVM to secure audio systems against adversarial attacks. The proposed architecture has several preprocessing modules for generating and enhancing spectrograms, including dimension reduction and smoothing. We extract features from small patches of the spectrograms using the speeded-up robust features (SURF) algorithm, which are further used to generate a codebook with the K-Means++ algorithm. Finally, the codewords are used to train an SVM on the codebook of SURF-generated vectors. Together, these steps yield a novel approach for audio classification that provides a good trade-off between accuracy and resilience. Experimental results on three environmental sound datasets show the competitive performance of the proposed approach compared to deep neural networks, both in terms of accuracy and robustness against strong adversarial attacks.
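The defense pipeline outlined above (DWT-based spectrogram, smoothing, SURF features on local patches, a K-Means++ codebook, and an SVM over the resulting codewords) can be prototyped compactly. The sketch below is a minimal illustration, not the authors' implementation: it assumes pywt for the wavelet decomposition, an OpenCV build with the non-free xfeatures2d module for SURF, and scikit-learn for K-Means++ and the SVM; the preprocessing and patch handling are deliberately simplified.

```python
# Minimal sketch of the DWT -> SURF -> K-Means++ codebook -> SVM pipeline.
# Assumptions (not from the paper): pywt, opencv-contrib with the non-free
# xfeatures2d module (for SURF), and scikit-learn. Parameters are illustrative.
import numpy as np
import pywt
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def dwt_spectrogram(signal, wavelet="haar", max_level=7, size=256):
    """Stack per-level DWT coefficient magnitudes into a 2D image (a crude 'spectrogram')."""
    coeffs = pywt.wavedec(signal, wavelet, level=max_level)
    width = max(len(c) for c in coeffs)
    rows = [np.interp(np.linspace(0, len(c) - 1, width), np.arange(len(c)), np.abs(c))
            for c in coeffs]
    spec = np.vstack(rows).astype(np.float32)
    # Resize, smooth, and rescale to an 8-bit image so SURF can operate on it.
    spec = cv2.resize(spec, (size, size), interpolation=cv2.INTER_LINEAR)
    spec = cv2.GaussianBlur(spec, (3, 3), 0)
    return cv2.normalize(spec, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def surf_descriptors(spec, hessian_threshold=400):
    """Extract SURF descriptors from local patches of the spectrogram."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    _, desc = surf.detectAndCompute(spec, None)
    return desc if desc is not None else np.empty((0, 64))

def bag_of_codewords(desc_per_signal, n_codewords=256, seed=0):
    """Build a K-Means++ codebook and encode each signal as a codeword histogram."""
    all_desc = np.vstack([d for d in desc_per_signal if len(d)])
    km = KMeans(n_clusters=n_codewords, init="k-means++", n_init=10, random_state=seed)
    km.fit(all_desc)
    feats = np.zeros((len(desc_per_signal), n_codewords))
    for i, d in enumerate(desc_per_signal):
        if len(d):
            words, counts = np.unique(km.predict(d), return_counts=True)
            feats[i, words] = counts / counts.sum()
    return km, feats

# Usage: given raw signals and labels, train the SVM on codeword histograms.
# descs = [surf_descriptors(dwt_spectrogram(x)) for x in signals]
# codebook, X = bag_of_codewords(descs)
# clf = SVC(kernel="rbf").fit(X, labels)
```

The codebook size, Hessian threshold, and SVM kernel would need tuning per dataset; the bag-of-codewords encoding is what gives the SVM a fixed-length input regardless of signal length.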

research · 07/27/2020 · From Sound Representation to Model Robustness
In this paper, we demonstrate the extreme vulnerability of a residual de...

research · 07/04/2019 · Adversarial Attacks in Sound Event Classification
Adversarial attacks refer to a set of methods that perturb the input to ...

research · 05/06/2021 · Point Cloud Audio Processing
Most audio processing pipelines involve transformations that act on fixe...

research · 10/22/2019 · Cross-Representation Transferability of Adversarial Perturbations: From Spectrograms to Audio Waveforms
This paper shows the susceptibility of spectrogram-based audio classifie...

research · 06/11/2020 · Machine learning model to cluster and map tribocorrosion regimes in feature space
Tribocorrosion maps serve the purpose of identifying operating condition...

research · 06/14/2020 · Defending SVMs against Poisoning Attacks: the Hardness and DBSCAN Approach
Adversarial machine learning has attracted a great amount of attention i...

research · 10/05/2020 · Adversarial Boot Camp: label free certified robustness in one epoch
Machine learning models are vulnerable to adversarial attacks. One appro...
