On the Exploitability of Audio Machine Learning Pipelines to Surreptitious Adversarial Examples

08/03/2021
by   Adelin Travers, et al.

Machine learning (ML) models are known to be vulnerable to adversarial examples. Applications of ML to voice biometrics authentication are no exception. Yet, the implications of audio adversarial examples for these real-world systems remain poorly understood, given that most research targets limited defenders who can only listen to the audio samples. Conflating the detectability of an attack with human perceptibility, research has focused on methods that aim to produce imperceptible adversarial examples, which humans cannot distinguish from the corresponding benign samples. We argue that this perspective is coarse for two reasons: (1) imperceptibility is impossible to verify, since doing so would require an experimental process that encompasses variations in listener training, equipment, volume, ear sensitivity, types of background noise, etc.; and (2) it disregards pipeline-based detection clues that realistic defenders leverage. This results in adversarial examples that are ineffective in the presence of knowledgeable defenders. An adversary therefore only needs an audio sample to be plausible to a human. We thus introduce surreptitious adversarial examples, a new class of attacks that evades both human and pipeline controls. In the white-box setting, we instantiate this class with a joint, multi-stage optimization attack. Using an Amazon Mechanical Turk user study, we show that this attack produces audio samples that are more surreptitious than previous attacks that aim solely for imperceptibility. Lastly, we show that surreptitious adversarial examples are challenging to develop in the black-box setting.
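To make the threat model concrete, here is a minimal, purely illustrative sketch, not the paper's attack, of the kind of L-infinity-bounded adversarial perturbation that "imperceptibility"-focused work produces: a gradient-sign (FGSM-style) step against a toy linear classifier standing in for an audio model, with the epsilon bound playing the role of the imperceptibility budget the abstract critiques. All names and the toy model are assumptions for illustration.

```python
import numpy as np

# Illustrative only: an L_inf-bounded adversarial perturbation against
# a toy linear "audio" classifier. The epsilon bound stands in for an
# imperceptibility constraint on the waveform.
rng = np.random.default_rng(0)

w = rng.normal(size=1000)           # fixed weights of the toy model
x = w / np.linalg.norm(w) * 0.01    # benign "audio" sample, classified +1

def predict(sample):
    """Toy binary classifier: sign of the linear score w . sample."""
    return 1 if w @ sample >= 0 else -1

# FGSM-style step: move against the gradient of the score, staying
# inside an L_inf ball of radius epsilon around the benign sample.
epsilon = 0.02
gradient = w                        # d(score)/dx for a linear model
x_adv = x - epsilon * np.sign(gradient)

print(predict(x), predict(x_adv))               # label before vs. after
print(np.max(np.abs(x_adv - x)) <= epsilon)     # perturbation stays in budget
```

The point of the abstract's argument is that satisfying such a norm bound says nothing about whether the sample survives pipeline-level checks or sounds plausible to a human listener.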


