Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems

03/18/2019
by Hadi Abdullah, et al.

Voice Processing Systems (VPSes), now widely deployed, have been made significantly more accurate through the application of recent advances in machine learning. However, adversarial machine learning has similarly advanced and has been used to demonstrate that VPSes are vulnerable to the injection of hidden commands: audio obscured by noise that is correctly recognized by a VPS but not by human beings. Such attacks, though, often depend on white-box knowledge of a specific machine learning model and are limited to specific microphones and speakers, restricting their use across different acoustic hardware platforms and thus their practicality. In this paper, we break these dependencies and make hidden command attacks more practical through model-agnostic (black-box) attacks, which exploit knowledge of the signal processing algorithms commonly used by VPSes to generate the data fed into machine learning systems. Specifically, we exploit the fact that multiple source audio samples have similar feature vectors when transformed by acoustic feature extraction algorithms (e.g., FFTs). We develop four classes of perturbations that create unintelligible audio and test them against 12 machine learning models, including 7 proprietary models (e.g., the Google Speech API, Bing Speech API, IBM Speech API, and Azure Speaker API), and demonstrate successful attacks against all targets. Moreover, we successfully use our maliciously generated audio samples in multiple hardware configurations, demonstrating effectiveness across both models and real systems. In so doing, we demonstrate that domain-specific knowledge of audio signal processing represents a practical means of generating successful hidden voice command attacks.
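
To make the core observation concrete, here is a minimal sketch of one of the paper's perturbation classes, time domain inversion (TDI). The synthetic tone signal, the 128-sample window, and the helper function are illustrative assumptions, not the authors' code or parameters; the point is only that reversing the samples inside each short window scrambles what a listener hears while leaving the per-window FFT magnitudes, and hence magnitude-based acoustic features, unchanged.

```python
import numpy as np

# Stand-in for recorded speech: a few tones plus noise (illustrative only).
rng = np.random.default_rng(0)
sr = 16000
t = np.arange(sr) / sr
audio = 0.6 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
audio += 0.05 * rng.standard_normal(audio.shape)

def time_domain_inversion(signal, window=128):
    """Reverse the samples inside each fixed-size window.

    For a real-valued window x, reversal only conjugates (and phase-shifts)
    its spectrum, so |FFT(x[::-1])| == |FFT(x)|: magnitude features survive
    even though the waveform itself becomes unintelligible.
    """
    out = signal.copy()
    for start in range(0, len(signal) - window + 1, window):
        out[start:start + window] = signal[start:start + window][::-1]
    return out

perturbed = time_domain_inversion(audio)

# Compare per-window magnitude spectra (analysis windows aligned with the
# inversion windows here, for simplicity).
window = 128
orig = np.abs(np.fft.rfft(audio[:window]))
pert = np.abs(np.fft.rfft(perturbed[:window]))
print(np.allclose(orig, pert))  # True: identical magnitude features
```

Because MFCC-style features are computed from these magnitude spectra, they are likewise preserved when analysis windows align with the inversion windows, so a recognizer keyed to such features can still transcribe audio that a human can no longer understand.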

Related research

01/23/2019
SirenAttack: Generating Adversarial Audio for End-to-End Acoustic Systems
Despite their immense popularity, deep learning-based acoustic systems a...

01/09/2023
Introducing Model Inversion Attacks on Automatic Speaker Recognition
Model inversion (MI) attacks allow reconstruction of average per-class repr...

10/11/2019
Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems
Automatic speech recognition and voice identification systems are being ...

08/08/2023
Evil Operation: Breaking Speaker Recognition with PaddingBack
Machine Learning as a Service (MLaaS) has gained popularity due to advan...

04/17/2019
Understanding the Effectiveness of Ultrasonic Microphone Jammer
Recent works have explained the principle of using ultrasonic transmissi...

06/22/2020
Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems
We propose a new class of signal injection attacks on microphones by phy...

12/24/2021
SoK: A Study of the Security on Voice Processing Systems
As the use of Voice Processing Systems (VPS) continues to become more pr...
