Inaudible Adversarial Perturbations for Targeted Attack in Speaker Recognition

05/21/2020
by Qing Wang, et al.

Speaker recognition is a popular topic in biometric authentication, and many deep learning approaches have achieved remarkable performance. However, it has been shown in both image and speech applications that deep neural networks are vulnerable to adversarial examples. In this study, we exploit this weakness to perform targeted adversarial attacks against an x-vector based speaker recognition system. We propose to generate inaudible adversarial perturbations for targeted white-box attacks on the speaker recognition system, based on the psychoacoustic principle of frequency masking. Specifically, we constrain the perturbation under the masking threshold of the original audio, instead of measuring it with a common l_p norm. Experiments on the Aishell-1 corpus show that our approach yields up to a 98.5% attack success rate while remaining indistinguishable to listeners. Furthermore, we also achieve an effective attack when applying the proposed approach to a completely unrelated waveform, such as music.
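
The masking-threshold idea lends itself to a projected-gradient style attack: optimize a targeted loss on the speaker embedding and, after each step, rescale the perturbation's spectrum so that no time-frequency bin exceeds the threshold of the original audio. The sketch below illustrates this in PyTorch; it is a simplified stand-in, not the paper's released implementation. The model handle `model`, the target embedding `target_emb`, and the precomputed threshold `theta` are assumptions for illustration, and the paper's exact formulation (e.g., a threshold-based penalty in the loss rather than a projection) may differ.

```python
import torch
import torch.nn.functional as F

def targeted_attack(model, x, target_emb, theta, steps=500, lr=1e-3, n_fft=512):
    """Craft a perturbation delta so that model(x + delta) matches target_emb,
    while |STFT(delta)| stays under the masking threshold theta of the original x.

    model      -- differentiable speaker-embedding network (e.g., an x-vector extractor)
    x          -- original waveform, shape (num_samples,)
    target_emb -- embedding of the target speaker, shape (emb_dim,)
    theta      -- masking threshold per time-frequency bin, shape (n_fft//2 + 1, frames),
                  computed offline with a psychoacoustic model (not shown here)
    """
    window = torch.hann_window(n_fft)
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        emb = model(x + delta)
        # Targeted objective: pull the adversarial embedding toward the target speaker.
        loss = 1.0 - F.cosine_similarity(emb.flatten(), target_emb.flatten(), dim=0)
        opt.zero_grad()
        loss.backward()
        opt.step()

        # Keep the perturbation "inaudible": rescale its spectral magnitude so that no
        # time-frequency bin exceeds the masking threshold of the original audio,
        # instead of clipping to an l_p-norm ball.
        with torch.no_grad():
            spec = torch.stft(delta, n_fft, window=window, return_complex=True)
            scale = torch.clamp(theta / spec.abs().clamp_min(1e-8), max=1.0)
            delta.copy_(torch.istft(spec * scale, n_fft, window=window,
                                    length=x.shape[-1]))

    return (x + delta).detach()
```

In practice the threshold would come from a global-masking computation over the original signal's power spectrum, and a penalty term in the loss is an equally common way to honor the constraint; the per-bin projection above is only one simple realization.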

Related research

04/07/2020 · Universal Adversarial Perturbations Generative Network for Speaker Recognition
Attacking deep learning based biometric systems has drawn more and more ...

11/17/2020 · FoolHD: Fooling speaker identification by Highly imperceptible adversarial Disturbances
Speaker identification models are vulnerable to carefully designed adver...

05/19/2021 · Attack on practical speaker verification system using universal adversarial perturbations
In authentication scenarios, applications of practical speaker verificat...

08/08/2023 · Evil Operation: Breaking Speaker Recognition with PaddingBack
Machine Learning as a Service (MLaaS) has gained popularity due to advan...

04/06/2021 · Exploring Targeted Universal Adversarial Perturbations to End-to-end ASR Models
Although end-to-end automatic speech recognition (e2e ASR) models are wi...

12/28/2021 · Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks
Minimal adversarial perturbations added to inputs have been shown to be ...

07/11/2019 · Why Blocking Targeted Adversarial Perturbations Impairs the Ability to Learn
Despite their accuracy, neural network-based classifiers are still prone...
