Practical Attacks on Voice Spoofing Countermeasures

07/30/2021
by Andre Kassis, et al.

Voice authentication has become an integral part of security-critical operations, such as bank transactions and call center conversations. The vulnerability of automatic speaker verification systems (ASVs) to spoofing attacks instigated the development of countermeasures (CMs), whose task is to distinguish bonafide from spoofed speech. Together, ASVs and CMs form today's voice authentication platforms, advertised as an impregnable access control mechanism. We develop the first practical attack on CMs, and show how a malicious actor may efficiently craft audio samples to bypass voice authentication in its strictest form. Previous works have primarily focused on non-proactive attacks or adversarial strategies against ASVs that do not produce speech in the victim's voice. The repercussions of our attacks are far more severe, as the samples we generate sound like the victim, eliminating any chance of plausible deniability. Moreover, the few existing adversarial attacks against CMs mistakenly optimize spoofed speech in the feature space and do not take into account the existence of ASVs, resulting in inferior synthetic audio that fails in realistic settings. We eliminate these obstacles through our key technical contribution: a novel joint loss function that enables mounting advanced adversarial attacks against combined ASV/CM deployments directly in the time domain. Our adversarial samples achieve concerning black-box success rates against state-of-the-art authentication platforms (up to 93.57%). Finally, we perform the first targeted, over-telephony-network attack on CMs, bypassing several challenges and enabling various potential threats, given the increased use of voice biometrics in call centers. Our results call into question the security of modern voice authentication systems in light of the real threat of attackers bypassing these measures to gain access to users' most valuable resources.
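To make the joint-loss idea concrete, here is a minimal sketch of optimizing a waveform perturbation against a combined ASV/CM objective. Everything in it is hypothetical: the paper's models are deep networks, whereas `w_cm` and `w_asv` below are linear stand-ins so the gradients stay analytic, and `alpha`, the step size, and the L-infinity budget `eps` are illustrative values, not the paper's settings.

```python
import numpy as np

# Hypothetical linear stand-ins for the two scoring functions.
# Higher w_cm @ x  -> CM judges the audio more "bonafide".
# Higher w_asv @ x -> ASV judges the audio closer to the victim.
rng = np.random.default_rng(0)
n = 16000  # one second of audio at 16 kHz
w_cm = rng.normal(size=n)
w_asv = rng.normal(size=n)

def joint_loss(x, alpha=0.5):
    """Joint objective over the raw waveform x: minimize it to push the
    CM toward 'bonafide' while keeping ASV similarity to the victim high."""
    return alpha * (-(w_cm @ x)) + (1 - alpha) * (-(w_asv @ x))

def joint_grad(x, alpha=0.5):
    # Gradient of joint_loss w.r.t. x (constant here because the
    # stand-in models are linear).
    return alpha * (-w_cm) + (1 - alpha) * (-w_asv)

# Projected gradient descent directly in the time domain,
# constrained to an L-infinity ball of radius eps around the original.
x0 = rng.normal(scale=0.1, size=n)  # the initial spoofed utterance
eps, step = 0.002, 0.001
x = x0.copy()
for _ in range(10):
    x = x - step * np.sign(joint_grad(x))
    x = x0 + np.clip(x - x0, -eps, eps)  # stay within the budget

# The perturbed audio scores better on the joint objective...
assert joint_loss(x) < joint_loss(x0)
# ...while the perturbation remains imperceptibly small.
assert np.max(np.abs(x - x0)) <= eps + 1e-12
```

The key point the sketch illustrates is that a single loss couples both defenses: optimizing against the CM alone can destroy the ASV match (and vice versa), whereas the weighted sum trades the two off during the same time-domain update.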

Related research

- A Practical Guide to Logical Access Voice Presentation Attack Detection (01/10/2022)
  Voice-based human-machine interfaces with an automatic speaker verificat...

- Adversarial Attacks on Spoofing Countermeasures of Automatic Speaker Verification (10/19/2019)
  High-performance spoofing countermeasure systems for automatic speaker v...

- Malafide: A Novel Adversarial Convolutive Noise Attack Against Deepfake and Spoofing Detection Systems (06/13/2023)
  We present Malafide, a universal adversarial attack against automatic sp...

- Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems (06/22/2020)
  We propose a new class of signal injection attacks on microphones by phy...

- SoK: A Study of the Security on Voice Processing Systems (12/24/2021)
  As the use of Voice Processing Systems (VPS) continues to become more pr...

- A Tandem Framework Balancing Privacy and Security for Voice User Interfaces (07/21/2021)
  Speech synthesis, voice cloning, and voice conversion techniques present...

- Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems (10/11/2019)
  Automatic speech recognition and voice identification systems are being ...
