Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound

07/17/2023
by Hanbo Cai, et al.

Deep neural networks (DNNs) have been widely and successfully deployed in various speech recognition applications. Recently, a few works revealed that these models are vulnerable to backdoor attacks, where adversaries implant malicious prediction behaviors into victim models by poisoning their training process. In this paper, we revisit poison-only backdoor attacks against speech recognition. We show that existing attacks are not stealthy: their trigger patterns are perceptible to humans or detectable by machines, mostly because the triggers are simple noises or separable, distinctive clips. Motivated by these findings, we propose exploiting elements of sound (e.g., pitch and timbre) to design stealthier yet effective poison-only backdoor attacks. Specifically, for the pitch-based trigger, we insert a short-duration high-pitched signal and raise the pitch of the remaining audio to 'mask' it. For the timbre-based attack, we manipulate the timbre features of victim audio, and we design a voiceprint selection module to facilitate the multi-backdoor setting. Our attacks generate more 'natural' poisoned samples and are therefore stealthier. Extensive experiments on benchmark datasets verify the effectiveness of our attacks under different settings (e.g., all-to-one, all-to-all, clean-label, physical, and multi-backdoor) and their stealthiness. The code for reproducing the main experiments is available at <https://github.com/HanboCai/BadSpeech_SoE>.
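
To make the pitch-based trigger concrete, the sketch below poisons a single audio clip as the abstract describes: it pitch-shifts the whole clip upward to 'mask' the trigger, then overlays a short high-pitched tone. This is a minimal sketch assuming librosa and NumPy; the function name `inject_pitch_trigger` and all constants (trigger frequency, duration, amplitude, shift amount) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import librosa

SAMPLE_RATE = 16000        # common rate for speech-command datasets
TRIGGER_FREQ_HZ = 7000.0   # hypothetical high-pitched trigger frequency
TRIGGER_DURATION_S = 0.1   # short-duration trigger
MASK_SHIFT_SEMITONES = 2   # hypothetical upward shift that 'masks' the tone

def inject_pitch_trigger(waveform: np.ndarray, sr: int = SAMPLE_RATE) -> np.ndarray:
    """Return a poisoned copy of `waveform` carrying a pitch-based trigger."""
    # 1) Raise the pitch of the whole clip so the added tone blends in.
    masked = librosa.effects.pitch_shift(waveform, sr=sr, n_steps=MASK_SHIFT_SEMITONES)
    # 2) Synthesize a short sine tone at the trigger frequency.
    t = np.arange(int(TRIGGER_DURATION_S * sr)) / sr
    tone = 0.1 * np.sin(2.0 * np.pi * TRIGGER_FREQ_HZ * t).astype(masked.dtype)
    # 3) Overlay the tone at the start of the clip.
    n = min(len(tone), len(masked))
    masked[:n] += tone[:n]
    # Keep samples in the valid [-1, 1] range after mixing.
    return np.clip(masked, -1.0, 1.0)

# Example usage (assumes a local file "clip.wav"):
# audio, _ = librosa.load("clip.wav", sr=SAMPLE_RATE)
# poisoned = inject_pitch_trigger(audio)
```

Pairing each poisoned waveform with the attacker-chosen target label during dataset construction would complete the poison-only setup; the timbre-based variant instead alters voice characteristics, which typically requires a voice-conversion model and is not sketched here.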


Related research:

- 11/02/2022 · BATT: Backdoor Attack with Transformation-based Triggers
- 07/30/2021 · Can You Hear It? Backdoor Attacks via Ultrasonic Triggers
- 02/07/2023 · SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency
- 03/30/2023 · Adversarial Attack and Defense for Dehazing Networks
- 01/26/2023 · Distilling Cognitive Backdoor Patterns within an Image
- 09/27/2022 · Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection
- 04/19/2022 · Indiscriminate Data Poisoning Attacks on Neural Networks
