Adversarial Attacks in Sound Event Classification

07/04/2019

by Vinod Subramanian, et al.

Adversarial attacks are methods that perturb the input to a classification model in order to fool the classifier. In this paper we apply different gradient-based adversarial attack algorithms to five deep learning models trained for sound event classification. Four of the models use mel-spectrogram input and one uses raw audio input; the models represent standard architectures such as convolutional, recurrent, and dense networks. The training data is the Freesound dataset released for task 2 of the DCASE 2018 challenge, and the models come from challenge participants who open-sourced their code. Our experiments show that adversarial examples can be generated with high confidence and low perturbation. In addition, we show that the attacks transfer effectively across the different models.
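The best-known gradient-based attack of the kind the abstract describes is the Fast Gradient Sign Method (FGSM), which nudges the input along the sign of the loss gradient. The sketch below is a hedged, generic illustration on a toy logistic classifier standing in for a spectrogram model; it is not the paper's exact setup, and the weights, input, and epsilon are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """Fast Gradient Sign Method on a logistic classifier:
    move x along the sign of the loss gradient so that the
    model's confidence in the true label y decreases, while
    bounding each perturbed feature by eps."""
    p = sigmoid(w @ x)               # model confidence for class 1
    grad_x = (p - y) * w             # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

# Toy stand-in for a trained model and a clean input (illustrative only).
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # hypothetical trained weights
x = rng.normal(size=8)   # hypothetical clean input features
y = 1.0                  # true label

x_adv = fgsm(x, y, w, eps=0.1)
print(np.max(np.abs(x_adv - x)))   # perturbation is bounded by eps
print(sigmoid(w @ x), sigmoid(w @ x_adv))  # confidence in the true class drops
```

The same principle underlies the stronger iterative attacks typically compared in this line of work; they apply the gradient step repeatedly with a projection back into an epsilon-ball around the original input.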


