FoolHD: Fooling speaker identification by Highly imperceptible adversarial Disturbances

11/17/2020
by Ali Shahin Shamsabadi, et al.

Speaker identification models are vulnerable to carefully designed adversarial perturbations of their input signals that induce misclassification. In this work, we propose FoolHD, a white-box, steganography-inspired adversarial attack that generates imperceptible adversarial perturbations against a speaker identification model. FoolHD uses a Gated Convolutional Autoencoder that operates in the DCT domain and is trained with a multi-objective loss function to generate and conceal the adversarial perturbation within the original audio file. In addition to degrading speaker identification performance, this multi-objective loss accounts for human perception through a frame-wise cosine similarity between MFCC feature vectors extracted from the original and adversarial audio files. We validate the effectiveness of FoolHD against a 250-speaker x-vector identification network trained on VoxCeleb, in terms of accuracy, success rate, and imperceptibility. Our results show that FoolHD generates highly imperceptible adversarial audio files (average PESQ scores above 4.30) while achieving a success rate of 99.6% in untargeted and targeted settings.
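The perceptual term of the multi-objective loss described above can be sketched as a mean frame-wise cosine distance between the MFCC features of the original and adversarial audio. The function name, the NumPy formulation, and the epsilon for numerical stability are illustrative assumptions, not the paper's implementation; the full attack would combine such a term with a misclassification loss on the speaker identification model.

```python
import numpy as np

def framewise_cosine_loss(mfcc_orig, mfcc_adv, eps=1e-8):
    """Perceptual loss sketch: mean (1 - cosine similarity) over frames.

    mfcc_orig, mfcc_adv: arrays of shape (num_frames, num_coeffs),
    MFCC feature vectors of the original and adversarial audio.
    Returns 0 when the two feature sequences are identical.
    """
    # Per-frame dot products and norm products (eps avoids division by zero)
    dots = np.sum(mfcc_orig * mfcc_adv, axis=1)
    norms = np.linalg.norm(mfcc_orig, axis=1) * np.linalg.norm(mfcc_adv, axis=1)
    cos_sim = dots / (norms + eps)
    # Average cosine distance across all frames
    return float(np.mean(1.0 - cos_sim))

# Identical features give (near-)zero perceptual loss
x = np.random.default_rng(0).normal(size=(100, 13))
print(framewise_cosine_loss(x, x))  # ≈ 0.0
```

Minimizing this term pushes the adversarial MFCC frames to stay directionally aligned with the originals, which is what keeps the perturbation perceptually inconspicuous while the classification term drives the misclassification.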


Related research

- Inaudible Adversarial Perturbations for Targeted Attack in Speaker Recognition (05/21/2020)
  Speaker recognition is a popular topic in biometric authentication and m...

- Universal Adversarial Perturbations Generative Network for Speaker Recognition (04/07/2020)
  Attacking deep learning based biometric systems has drawn more and more ...

- Universal Adversarial Audio Perturbations (08/08/2019)
  We demonstrate the existence of universal adversarial perturbations, whi...

- Attack on practical speaker verification system using universal adversarial perturbations (05/19/2021)
  In authentication scenarios, applications of practical speaker verificat...

- Cosine similarity-based adversarial process (07/01/2019)
  An adversarial process between two deep neural networks is a promising a...

- Symmetric Saliency-based Adversarial Attack To Speaker Identification (10/30/2022)
  Adversarial attack approaches to speaker identification either need high...
