FAAG: Fast Adversarial Audio Generation through Interactive Attack Optimisation

02/11/2022
by Yuantian Miao, et al.

Automatic Speech Recognition (ASR) services inherit the vulnerabilities of deep neural networks, such as susceptibility to crafted adversarial examples. Existing methods often suffer from low efficiency because the target phrases are injected across the entire audio sample, resulting in a high demand for computational resources. This paper proposes a novel scheme named FAAG, an iterative optimization-based method for generating targeted adversarial examples quickly. By injecting noise over only the beginning part of the audio, FAAG produces high-quality adversarial audio with a high success rate in a short time. Specifically, we use the audio's logits output to map each character of the target transcription to an approximate frame position in the audio. As a result, FAAG can generate an adversarial example in approximately two minutes using CPUs only, and in around ten seconds with one GPU, while maintaining an average success rate above 85%. Compared with the baseline, our method can speed up the adversarial example generation process by around 60%. Furthermore, we found that appending benign audio to any suspicious example can effectively defend against the targeted adversarial attack. We hope that this work paves the way for inventing new adversarial attacks against speech recognition under computational constraints.
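The paper does not include code, but the character-to-frame mapping it describes can be sketched under the common assumption that the ASR model (e.g. a CTC-trained network such as DeepSpeech) emits per-frame logits over a character vocabulary. A greedy pass over the argmax path then gives an approximate frame position for each emitted character; the function name and toy logits below are illustrative, not from the paper.

```python
import numpy as np

def char_frame_positions(logits, blank_id=0):
    """Map each emitted character to the frame where it first appears,
    using a greedy (argmax) CTC-style pass over per-frame logits.

    logits: (T, V) array of per-frame scores; blank_id is the CTC blank.
    Returns a list of (token_id, frame_index) pairs.
    """
    best = logits.argmax(axis=1)  # greedy path: one token id per frame
    positions = []
    prev = blank_id
    for t, tok in enumerate(best):
        # CTC collapse rule: emit when non-blank and different from the
        # previous frame's token (repeats belong to the same character)
        if tok != blank_id and tok != prev:
            positions.append((int(tok), t))
        prev = tok
    return positions

# toy example: 6 frames, vocabulary of 4 tokens (0 = blank)
logits = np.array([
    [0.1, 0.9, 0.0, 0.0],   # frame 0 -> token 1
    [0.1, 0.8, 0.1, 0.0],   # frame 1 -> token 1 (repeat, not re-emitted)
    [0.9, 0.0, 0.1, 0.0],   # frame 2 -> blank
    [0.0, 0.1, 0.9, 0.0],   # frame 3 -> token 2
    [0.0, 0.0, 0.1, 0.9],   # frame 4 -> token 3
    [0.9, 0.0, 0.0, 0.1],   # frame 5 -> blank
])
print(char_frame_positions(logits))  # [(1, 0), (2, 3), (3, 4)]
```

With such a mapping, an attack in the spirit of FAAG could restrict the optimized perturbation to the frames preceding the characters it needs to alter, rather than perturbing the whole waveform.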


Related research

01/05/2018 · Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
We construct targeted audio adversarial examples on automatic speech rec...

01/26/2019 · Adversarial Attack on Speech-to-Text Recognition Models
Recent studies have highlighted audio adversarial examples as a ubiquito...

01/26/2019 · Towards Weighted-Sampling Audio Adversarial Example Attack
Recent studies have highlighted audio adversarial examples as a ubiquito...

11/19/2022 · Phonemic Adversarial Attack against Audio Recognition in Real World
Recently, adversarial attacks for audio recognition have attracted much ...

06/15/2022 · Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack
The AutoAttack (AA) has been the most reliable method to evaluate advers...

10/14/2020 · Towards Resistant Audio Adversarial Examples
Adversarial examples tremendously threaten the availability and integrit...

12/25/2018 · Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition
Neural models enjoy widespread use across a variety of tasks and have gr...
