SA: Sliding attack for synthetic speech detection with resistance to clipping and self-splicing

08/27/2022
by Deng JiaCheng, et al.

Deep neural networks are vulnerable to adversarial examples that mislead models with imperceptible perturbations. In audio, although adversarial examples have achieved high attack success rates in both white-box and black-box settings, most existing adversarial attacks are constrained by the input length. A more practical scenario is one in which the adversarial example must be clipped or self-spliced before being fed into the black-box model. It is therefore necessary to explore how to improve transferability under different input-length settings. In this paper, we take the synthetic speech detection task as an example and consider two representative SOTA models. By analyzing the gradients obtained when samples are cropped or self-spliced before being fed into the models, we observe that fragments with the same sample values yield similar gradients across different models. Inspired by this observation, we propose a new adversarial attack method termed the sliding attack. Specifically, we make each sampling point aware of gradients at different locations, which simulates the situation where adversarial examples are fed into black-box models with varying input lengths. Therefore, instead of directly using the current gradient in each iteration of the gradient calculation, we go through the following three steps. First, we extract sub-segments of different lengths using sliding windows. We then augment the sub-segments with data from the adjacent domains. Finally, we feed the sub-segments into different models to obtain aggregated gradients for updating the adversarial examples. Empirical results demonstrate that our method significantly improves the transferability of adversarial examples after clipping or self-splicing. Moreover, our method also enhances transferability between models based on different features.
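As a rough illustration of the three-step aggregation described above, the following PyTorch sketch extracts sub-segments with sliding windows, applies a simple self-splicing-style augmentation, and sums gradients from several surrogate models to update the adversarial waveform. The window sizes, stride, splice augmentation, and step sizes here are assumptions chosen for illustration; they are not the authors' exact configuration.

```python
import torch

def sliding_attack_step(x_adv, y, models, x_orig,
                        window_sizes=(16000, 32000), stride=8000,
                        alpha=1e-3, epsilon=1e-2):
    """One iteration of a sliding-window gradient aggregation (illustrative sketch).

    x_adv:  current adversarial waveform, shape (batch, num_samples)
    y:      target labels, shape (batch,)
    models: list of surrogate detectors returning class logits
    x_orig: clean waveform used to project back into the epsilon-ball
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    grad_sum = torch.zeros_like(x_adv)
    length = x_adv.shape[-1]

    for win in window_sizes:
        for start in range(0, max(length - win, 1), stride):
            # Extract a sub-segment as an independent leaf tensor.
            seg = x_adv[..., start:start + win].clone().detach().requires_grad_(True)
            # Assumed self-splicing augmentation: append part of the segment to itself.
            spliced = torch.cat([seg, seg[..., :win // 2]], dim=-1)
            # Aggregate the loss over all surrogate models.
            loss = sum(loss_fn(m(spliced), y) for m in models)
            # Accumulate the segment gradient back at its original position.
            grad_sum[..., start:start + win] += torch.autograd.grad(loss, seg)[0]

    # Sign-based update, projected into the epsilon-ball around the clean input.
    x_adv = x_adv + alpha * grad_sum.sign()
    x_adv = torch.clamp(x_adv, x_orig - epsilon, x_orig + epsilon)
    return x_adv.detach()
```

In this sketch, every sampling point receives gradient contributions from multiple window placements and lengths, which is the mechanism the abstract describes for making the perturbation robust to clipping and self-splicing at inference time.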

