Adversarial Attacks and Defenses for Speaker Identification Systems

01/22/2021
by Sonal Joshi, et al.

Research in automatic speaker recognition (SR) has been undertaken for several decades, reaching excellent performance. However, researchers have discovered potential loopholes in these technologies, such as spoofing attacks. More recently, a new class of attack, termed adversarial attacks, has proved fatal in computer vision, and it is vital to study their effect on SR systems. This paper examines how state-of-the-art speaker identification (SID) systems are vulnerable to adversarial attacks and how to defend against them. We investigated adversarial attacks common in the literature: the fast gradient sign method (FGSM), iterative FGSM (also known as the basic iterative method, BIM), and Carlini-Wagner (CW). Furthermore, we propose four pre-processing defenses against these attacks: randomized smoothing, DefenseGAN, variational autoencoder (VAE), and WaveGAN vocoder. We found that SID is extremely vulnerable to iterative FGSM and CW attacks. The randomized smoothing defense robustified the system against imperceptible BIM and CW attacks, recovering classification accuracies of around 97%. The generative-model defenses (DefenseGAN, VAE, and WaveGAN) project adversarial examples, which lie outside the clean-data manifold, back onto it. When the attacker cannot adapt the attack to the defense (black-box defense), WaveGAN performed best, staying close to the clean condition (accuracy > 97%). However, when the attacker has access to the defense model (white-box defense), VAE and WaveGAN protection dropped significantly, to around 50% for the CW attack. To counteract this, we combined randomized smoothing with VAE or WaveGAN. We found that smoothing followed by the WaveGAN vocoder was the most effective defense overall. As a black-box defense, it provides 93% accuracy. As a white-box defense, accuracy only degraded for iterative attacks with perceptible perturbations (L∞ ≥ 0.01).

