Audio Attacks and Defenses against AED Systems – A Practical Study

06/14/2021
by Rodrigo dos Santos, et al.

Audio Event Detection (AED) systems capture audio from the environment and employ deep learning algorithms to detect the presence of a specific sound of interest. In this paper, we evaluate deep learning-based AED systems against evasion attacks through adversarial examples. We run multiple security-critical AED tasks, implemented as CNN classifiers, and then generate audio adversarial examples using two different types of noise, namely background and white noise, that an adversary can use to evade detection. We also examine the robustness of existing third-party AED-capable devices, such as Nest devices manufactured by Google, which run their own black-box deep learning models. We show that an adversary can craft audio adversarial inputs that cause AED systems to misclassify, similarly to what prior work has demonstrated with adversarial examples in the image domain. We then seek to improve the classifiers' robustness through countermeasures to these attacks: adversarial training and a custom denoising technique. We show that these countermeasures, applied to the audio input either in isolation or in combination, can be successful, yielding performance improvements of nearly fifty percent for classifiers under attack.
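
To make the attack concrete, the sketch below shows one way such noise-based evasion could be attempted against a waveform-level CNN classifier: mix background or white noise into a clean clip at progressively lower signal-to-noise ratios until the prediction flips. The model, the raw-waveform input shape, and the SNR sweep are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical sketch of noise-based evasion against an audio CNN classifier.
# `model`, the (1, num_samples) input shape, and the SNR sweep are assumptions.
import numpy as np
import torch

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix `noise` into `clean` so the result has the requested SNR in dB."""
    noise = np.resize(noise, clean.shape)              # loop/trim noise to length
    p_clean = np.mean(clean ** 2) + 1e-12
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10.0)))
    return clean + scale * noise

def evade(model: torch.nn.Module, clean: np.ndarray, noise: np.ndarray,
          true_label: int, snrs_db=(30, 20, 10, 5, 0)):
    """Return the subtlest noisy clip (highest SNR) that flips the prediction."""
    model.eval()
    for snr in snrs_db:                                # start with the faintest noise
        adv = mix_at_snr(clean, noise, snr)
        x = torch.from_numpy(adv).float().unsqueeze(0)  # shape (1, num_samples)
        with torch.no_grad():
            pred = model(x).argmax(dim=-1).item()
        if pred != true_label:
            return adv, snr                            # evasion succeeded
    return None, None                                  # classifier held up here

# White-noise variant: the adversary needs no recorded background sounds.
# white = np.random.randn(len(clean))
# adv, snr = evade(model, clean, white, true_label=3)
```

The two countermeasures can be sketched in a similarly simplified form: a noise-augmented training step standing in for adversarial training, and a basic spectral-gating filter standing in for the custom denoising technique. Both are illustrative assumptions rather than the authors' implementation.

```python
# Simplified sketches of the two defenses discussed above. The spectral-gating
# denoiser and the noise-augmented training step are illustrative assumptions.
import torch
import torch.nn.functional as F

def spectral_gate(wave: torch.Tensor, n_fft: int = 512, gate_db: float = -40.0) -> torch.Tensor:
    """Crude denoiser: suppress STFT bins more than `gate_db` below the peak."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(wave, n_fft=n_fft, window=window, return_complex=True)
    mag = spec.abs()
    mask = (mag >= mag.max() * 10 ** (gate_db / 20.0)).to(spec.dtype)
    return torch.istft(spec * mask, n_fft=n_fft, window=window, length=wave.shape[-1])

def train_step(model, optimizer, batch, labels, noise_scale: float = 0.3):
    """Noise-augmented training step: perturb half the batch with white noise."""
    batch = batch.clone()
    half = batch.shape[0] // 2
    rms = batch[:half].pow(2).mean(dim=-1, keepdim=True).sqrt()   # per-clip level
    batch[:half] = batch[:half] + noise_scale * rms * torch.randn_like(batch[:half])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch), labels)       # standard classification loss
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference time the denoiser can be chained in front of the classifier:
# logits = model(spectral_gate(noisy_clip).unsqueeze(0))
```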
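Combining the two defenses simply means training with the noisy augmentation and then applying the denoising front end to every input at inference time; the abstract reports that either alone, or both together, recovers a substantial share of the accuracy lost under attack.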


