Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning

06/01/2021
by Haibin Wu, et al.

Previous works have shown that automatic speaker verification (ASV) is seriously vulnerable to malicious spoofing attacks, such as replay, synthetic speech, and the recently emerged adversarial attacks. Great effort has been devoted to defending ASV against replay and synthetic speech; however, only a few approaches have been explored for adversarial attacks. All existing approaches to tackling adversarial attacks on ASV require knowledge of how the adversarial samples are generated, but it is impractical for defenders to know the exact attack algorithms applied by in-the-wild attackers. This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms. Inspired by self-supervised learning models (SSLMs), which can alleviate superficial noise in the inputs and reconstruct clean samples from corrupted ones, this work regards adversarial perturbations as one kind of noise and defends ASV with SSLMs. Specifically, we propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection. Experimental results show that our detection module effectively shields the ASV system by detecting adversarial samples with an accuracy of around 80%. Since there is no common metric for evaluating the adversarial defense performance of ASV, this work also formalizes evaluation metrics for adversarial defense that take both purification-based and detection-based approaches into account. We encourage future works to benchmark their approaches within the proposed evaluation framework.
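The two defense perspectives can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example rather than the paper's actual implementation: it assumes a trained self-supervised reconstruction model `ssl_model` that maps a possibly adversarial input back toward a clean one, and an ASV scoring function `asv_score_fn`; purification simply re-synthesizes the input, and detection flags inputs whose ASV score shifts sharply after purification.

```python
import torch


def purify(ssl_model, test_input):
    # Pass the (possibly adversarial) input through a self-supervised
    # reconstruction model; small additive perturbations behave like noise
    # and tend to be washed out by the reconstruction.
    with torch.no_grad():
        return ssl_model(test_input)


def is_adversarial(asv_score_fn, enrollment, test_input, ssl_model, threshold=0.1):
    # Detection by score discrepancy: a genuine utterance scores almost the
    # same before and after purification, while an adversarial one shifts
    # noticeably once the perturbation that fooled the ASV is removed.
    with torch.no_grad():
        score_raw = asv_score_fn(enrollment, test_input)
        score_pur = asv_score_fn(enrollment, purify(ssl_model, test_input))
    return abs(score_raw - score_pur) > threshold


# Toy usage with stand-in components (cosine-similarity "ASV" on embeddings,
# identity "SSLM"); a real setup would use trained speech models instead.
if __name__ == "__main__":
    asv = lambda e, t: torch.nn.functional.cosine_similarity(e, t, dim=0).item()
    sslm = lambda x: x
    enroll_emb, test_emb = torch.randn(192), torch.randn(192)
    print(is_adversarial(asv, enroll_emb, test_emb, sslm))
```

The threshold, score function, and embedding dimension above are illustrative placeholders, not values taken from the paper; the sketch only conveys why a purification model also enables detection.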


Related research:

- Adversarial defense for automatic speaker verification by cascaded self-supervised learning models (02/14/2021)
- Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification (06/11/2020)
- The defender's perspective on automatic speaker verification: An overview (05/22/2023)
- Voting for the right answer: Adversarial defense for speaker verification (06/15/2021)
- Learning to Disentangle Robust and Vulnerable Features for Adversarial Detection (09/10/2019)
- White-Box Adversarial Defense via Self-Supervised Data Estimation (09/13/2019)
- Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems (08/18/2020)
