Dompteur: Taming Audio Adversarial Examples

02/10/2021
by Thorsten Eisenhofer, et al.

Adversarial examples seem to be inevitable. These specifically crafted inputs allow attackers to arbitrarily manipulate machine learning systems. Even worse, they often seem harmless to human observers. In our digital society, this poses a significant threat. For example, Automatic Speech Recognition (ASR) systems, which serve as hands-free interfaces to many kinds of systems, can be attacked with inputs incomprehensible to human listeners. The research community has tried several approaches to tackle this problem, so far without success. In this paper, we propose a different perspective: We accept the presence of adversarial examples against ASR systems, but we require them to be perceivable by human listeners. By applying the principles of psychoacoustics, we can remove semantically irrelevant information from the ASR input and train a model that resembles human perception more closely. We implement our idea in a tool named Dompteur and demonstrate that our augmented system, in contrast to an unmodified baseline, successfully focuses on perceptible ranges of the input signal. This change forces adversarial examples into the audible range, while incurring only minimal computational overhead and preserving benign performance. To evaluate our approach, we construct an adaptive attacker that actively tries to circumvent our augmentations, and we demonstrate that adversarial examples produced by this attacker remain clearly perceivable. Finally, we substantiate our claims by performing a hearing test with crowd-sourced human listeners.
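The preprocessing idea described above can be illustrated with a short sketch. The snippet below is not the Dompteur implementation; it is a minimal Python sketch under the assumption that inaudible content is removed by discarding STFT bins whose level falls below Terhardt's approximation of the absolute threshold of hearing, plus a configurable margin. The function names, the 512-sample window, and the fixed +96 dB calibration offset are illustrative choices, not values from the paper.

```python
# Hedged sketch: psychoacoustic filtering of an audio signal before ASR.
# Assumption: time-frequency bins below the absolute threshold of hearing
# (Terhardt's approximation) plus a margin are zeroed out, so only content
# a human listener could plausibly perceive reaches the recognizer.
import numpy as np
from scipy.signal import stft, istft

def hearing_threshold_db(freq_hz):
    """Terhardt's approximation of the absolute threshold of hearing (dB SPL)."""
    f = np.maximum(freq_hz, 1e-2) / 1000.0  # convert to kHz, avoid f = 0
    return (3.64 * f ** -0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

def psychoacoustic_filter(audio, sample_rate, margin_db=0.0):
    """Zero out STFT bins whose level falls below the hearing threshold."""
    freqs, _, spec = stft(audio, fs=sample_rate, nperseg=512)
    # Map magnitudes to a rough dB scale; the +96 dB calibration from
    # digital amplitude to dB SPL is an assumption of this sketch.
    mag_db = 20.0 * np.log10(np.abs(spec) + 1e-12) + 96.0
    mask = mag_db >= (hearing_threshold_db(freqs)[:, None] + margin_db)
    _, filtered = istft(spec * mask, fs=sample_rate, nperseg=512)
    return filtered[: len(audio)]
```

Feeding an ASR system psychoacoustic_filter(audio, 16000) instead of the raw waveform approximates the intuition above: any adversarial perturbation must now survive inside the audible bins, which is exactly the property the paper aims for.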

Related research

12/26/2018
A Multiversion Programming Inspired Approach to Detecting Audio Adversarial Examples
Adversarial examples (AEs) are crafted by adding human-imperceptible per...

08/05/2019
Robust Over-the-Air Adversarial Examples Against Automatic Speech Recognition Systems
Automatic speech recognition (ASR) systems are possible to fool via targ...

08/05/2019
Imperio: Robust Over-the-Air Adversarial Examples for Automatic Speech Recognition Systems
Automatic speech recognition (ASR) systems are possible to fool via targ...

09/28/2018
Characterizing Audio Adversarial Examples Using Temporal Dependency
Recent studies have highlighted adversarial examples as a ubiquitous thr...

05/24/2020
Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification
Machine learning systems and also, specifically, automatic speech recogn...

10/26/2022
There is more than one kind of robustness: Fooling Whisper with adversarial examples
Whisper is a recent Automatic Speech Recognition (ASR) model displaying ...

06/18/2021
Bad Characters: Imperceptible NLP Attacks
Several years of research have shown that machine-learning systems are v...