Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification

06/11/2020
by   Xu Li, et al.

Adversarial attacks on automatic speaker verification (ASV) systems have recently attracted widespread attention, as they pose severe threats to ASV systems. However, methods to defend against such attacks are limited. Existing approaches mainly focus on retraining ASV systems with adversarial data augmentation. Moreover, the robustness of countermeasures against different attack settings has been insufficiently investigated. Orthogonal to prior approaches, this work proposes to defend ASV systems against adversarial attacks with a separate detection network, rather than augmenting adversarial data into ASV training. A VGG-like binary classification detector is introduced and demonstrated to be effective at detecting adversarial samples. To investigate detector robustness in a realistic defense scenario where unseen attack settings exist, we analyze various attack settings and observe that the detector is robust against unseen substitute ASV systems (6.27% EER_det degradation in the worst case), but has weak robustness against unseen perturbation methods (50.37% EER_det degradation in the worst case). The weak robustness against unseen perturbation methods points to a direction for developing stronger countermeasures.
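To make the detection idea concrete, the sketch below shows the forward pass of a small VGG-style binary classifier over a spectrogram-like input, using plain NumPy. This is an illustrative toy, not the paper's architecture: the layer sizes, filter counts, and random weights are assumptions, and a real detector would use a trained deep network (e.g. stacked conv blocks) in a deep-learning framework.

```python
import numpy as np

# Hypothetical sketch: a tiny "VGG-like" detector forward pass in NumPy.
# Real detectors stack many conv/pool blocks and are trained on pairs of
# genuine and adversarial audio features; all sizes here are illustrative.

def relu(x):
    return np.maximum(x, 0.0)

def conv2d(x, kernels):
    """Valid 2-D convolution: x is (H, W), kernels is (n, kh, kw)."""
    n, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((n, H - kh + 1, W - kw + 1))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            patch = x[i:i + kh, j:j + kw]
            out[:, i, j] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1]))
    return out

def maxpool2(x):
    """2x2 max pooling over each channel of (n, H, W)."""
    n, H, W = x.shape
    return x[:, :H // 2 * 2, :W // 2 * 2].reshape(n, H // 2, 2, W // 2, 2).max(axis=(2, 4))

def detector_forward(spec, conv_k, fc_w, fc_b):
    """Return P(input is adversarial) for one conv-pool block + sigmoid head."""
    h = maxpool2(relu(conv2d(spec, conv_k)))          # (4, 7, 7) feature map
    logit = fc_w @ h.ravel() + fc_b                   # fully connected layer
    return 1.0 / (1.0 + np.exp(-logit))               # sigmoid -> probability

rng = np.random.default_rng(0)
spec = rng.standard_normal((16, 16))                  # toy log-spectrogram patch
conv_k = rng.standard_normal((4, 3, 3)) * 0.1        # 4 small conv filters
fc_w = rng.standard_normal(4 * 7 * 7) * 0.1
p = detector_forward(spec, conv_k, fc_w, fc_b=0.0)
print(p)  # a probability in (0, 1); thresholded to accept/reject the input
```

At deployment, such a detector sits in front of the ASV system and rejects inputs whose adversarial probability exceeds a threshold chosen to balance false alarms against missed attacks (the EER_det operating point in the abstract).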

