Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning

06/05/2020
by   Haibin Wu, et al.
High-performance anti-spoofing models for automatic speaker verification (ASV) have been widely used to protect ASV systems by identifying and filtering spoofing audio deliberately generated by text-to-speech, voice conversion, audio replay, etc. However, it has been shown that these high-performance anti-spoofing models are vulnerable to adversarial attacks. Adversarial examples, which are indistinguishable from the original data yet cause incorrect predictions, are dangerous for anti-spoofing models, and detecting them is indisputably important. To explore this issue, we propose to employ Mockingjay, a self-supervised learning based model, to protect anti-spoofing models against adversarial attacks in the black-box scenario. Self-supervised learning models are effective at improving downstream task performance, such as phone classification or automatic speech recognition (ASR), but their effect as a defense against adversarial attacks has not yet been explored. In this work, we explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks. A layerwise noise-to-signal ratio (LNSR) is proposed to quantify and measure the effectiveness of deep models in countering adversarial noise. Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples and successfully counter black-box attacks.
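The abstract does not give the exact formula for the layerwise noise-to-signal ratio (LNSR), but the idea it describes can be sketched as follows: for each layer of a deep model, compare the hidden representation of an adversarial input against that of its clean counterpart, and take the ratio of the perturbation-induced difference to the clean representation's magnitude. The function name, the norm choice, and the toy inputs below are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def layerwise_nsr(clean_layers, adv_layers):
    """Illustrative layerwise noise-to-signal ratio (assumed form):
    per layer, the L2 norm of the difference between adversarial and
    clean hidden representations, divided by the L2 norm of the clean
    representation. Smaller values at deeper layers would suggest the
    model attenuates adversarial noise as depth increases."""
    return [
        float(np.linalg.norm(adv - clean) / np.linalg.norm(clean))
        for clean, adv in zip(clean_layers, adv_layers)
    ]

# Toy example: three "layers" where the injected perturbation
# shrinks with depth, mimicking a model that suppresses the noise.
clean = [np.ones(8) for _ in range(3)]
adv = [clean[i] + 0.1 / (i + 1) for i in range(3)]

ratios = layerwise_nsr(clean, adv)  # decreases layer by layer
```

In this toy setup the ratio at the first layer is 0.1 and halves at the second, so a monotonically decreasing LNSR curve is the signature one would look for in a noise-countering model.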


