Fooling End-to-end Speaker Verification by Adversarial Examples

01/10/2018
by Felix Kreuk, et al.

Automatic speaker verification systems are increasingly used as the primary means to authenticate customers. Recently, it has been proposed to train speaker verification systems using end-to-end deep neural models. In this paper, we show that such systems are vulnerable to adversarial example attacks. Adversarial examples are generated by adding a specially crafted noise to original speaker examples, such that they are almost indistinguishable from the originals to a human listener. Yet the generated waveforms, which sound like speaker A, can fool such a system into accepting them as if they were uttered by speaker B. We present white-box attacks on an end-to-end deep network trained on either YOHO or NTIMIT. We also present two black-box attacks: one where the adversarial examples were generated with a system trained on YOHO but the attack targets a system trained on NTIMIT, and one where the adversarial examples were generated with a system trained on Mel-spectrum features but the attack targets a system trained on MFCC features. The results show that the accuracy of the attacked system decreased and its false-positive rate increased dramatically.
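
The abstract does not spell out how the perturbations are computed; for intuition only, the sketch below shows a gradient-sign (FGSM-style) perturbation applied directly to a raw waveform against a generic end-to-end speaker model. The model, loss, and epsilon value are illustrative assumptions, not the authors' actual setup.

```python
import torch
import torch.nn.functional as F

def fgsm_waveform_attack(model, waveform, true_speaker, epsilon=0.002):
    """Illustrative FGSM-style sketch (not the authors' code).

    `model` is assumed to map a raw waveform tensor of shape (1, T)
    to speaker logits; `true_speaker` is the true speaker's class index.
    """
    x = waveform.clone().detach().requires_grad_(True)
    logits = model(x)
    # Loss w.r.t. the true speaker; the signed gradient step below
    # pushes the decision away from that speaker.
    loss = F.cross_entropy(logits, true_speaker)
    loss.backward()
    # A small signed step keeps the perturbation nearly inaudible.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the adversarial waveform in the valid audio range.
    return x_adv.clamp(-1.0, 1.0).detach()
```

A targeted variant of the same idea would instead descend the loss toward a chosen impostor speaker, which matches the scenario described above of waveforms that sound like speaker A being accepted as speaker B.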


