Evil Operation: Breaking Speaker Recognition with PaddingBack

08/08/2023
by Zhe Ye, et al.

Machine Learning as a Service (MLaaS) has grown popular alongside advances in machine learning, but reliance on untrusted third-party platforms raises AI-security concerns, particularly around backdoor attacks. Recent research has shown that speech backdoors, like image backdoors, can use transformations as triggers; however, these transformations are easily detected by human ears, arousing suspicion. In this paper, we introduce PaddingBack, an inaudible backdoor attack that uses a malicious operation to make poisoned samples indistinguishable from clean ones. Instead of injecting external perturbations as triggers, we exploit padding, a widely used speech-signal operation, to break speaker recognition systems. Experimental results demonstrate the effectiveness of the proposed approach, achieving a high attack success rate while maintaining high benign accuracy. Furthermore, PaddingBack resists defense methods while remaining stealthy against human perception. The results of the stealthiness experiment are available at https://nbufabio25.github.io/paddingback/.
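The core idea above, repurposing an ordinary padding operation as a backdoor trigger during data poisoning, can be sketched as follows. This is a minimal illustration only: the function name, the fixed padding length, and the use of zero-padding are assumptions for the sketch, not the paper's actual trigger design.

```python
import numpy as np

def pad_trigger(waveform: np.ndarray, pad_len: int = 800, target_label: int = 0):
    """Hypothetical poisoning step: append a fixed-length run of zeros
    (an ordinary padding operation) to the waveform and relabel the
    sample with the attacker's target speaker identity."""
    padding = np.zeros(pad_len, dtype=waveform.dtype)
    poisoned = np.concatenate([waveform, padding])
    return poisoned, target_label

# One second of 16 kHz audio standing in for a clean training sample.
clean = np.random.randn(16000).astype(np.float32)
poisoned, label = pad_trigger(clean)
```

Because padding is a routine preprocessing operation on variable-length speech, a trigger of this kind leaves the audible content untouched, which is what makes the attack hard to detect by listening.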


Related research:

- Learning to fool the speaker recognition (04/07/2020): Due to the widespread deployment of fingerprint/face/speaker recognition...
- Inaudible Adversarial Perturbations for Targeted Attack in Speaker Recognition (05/21/2020): Speaker recognition is a popular topic in biometric authentication and m...
- Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems (03/18/2019): Voice Processing Systems (VPSes), now widely deployed, have been made si...
- BadSQA: Stealthy Backdoor Attacks Using Presence Events as Triggers in Non-Intrusive Speech Quality Assessment (09/04/2023): Non-Intrusive speech quality assessment (NISQA) has gained significant a...
- Real-time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems (03/04/2020): As the popularity of voice user interface (VUI) exploded in recent years...
- SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems (07/13/2020): Speech and speaker recognition systems are employed in a variety of appl...
- A Low-Cost Attack against the hCaptcha System (04/10/2021): CAPTCHAs are a defense mechanism to prevent malicious bot programs from ...
