Neural Predictor for Black-Box Adversarial Attacks on Speech Recognition

03/18/2022
by Marie Biolková, et al.

Recent works have revealed the vulnerability of automatic speech recognition (ASR) models to adversarial examples (AEs), i.e., small perturbations that cause the audio signal to be mistranscribed. Studying audio adversarial attacks is therefore the first step towards robust ASR. Despite significant progress in attacking audio examples, the black-box setting remains challenging because only hard-label transcription information is available; with such limited feedback, existing black-box methods often require an excessive number of queries to attack a single audio example. In this paper, we introduce NP-Attack, a neural predictor-based method that progressively steers the search towards a small adversarial perturbation. Given a perturbation direction, our neural predictor directly estimates the smallest perturbation along that direction that causes a mistranscription. Because the predictor is differentiable, NP-Attack can learn promising perturbation directions via gradient-based optimization. Experimental results show that NP-Attack achieves results competitive with other state-of-the-art black-box adversarial attacks while requiring significantly fewer queries. The code of NP-Attack is available online.
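
The abstract describes the core mechanism: a neural predictor maps a perturbation direction to the estimated smallest step size that flips the transcription, and because that predictor is differentiable, promising directions can be found by gradient descent rather than by blind querying. The sketch below is not the authors' released code; it is a minimal PyTorch illustration of that idea under assumed choices (a small MLP as the predictor, MSE fitting on queried direction/radius pairs, an Adam-based direction search). The names RadiusPredictor, fit_predictor, and propose_direction, as well as all hyperparameters, are hypothetical.

```python
# Minimal sketch, NOT the authors' implementation: a differentiable surrogate
# ("neural predictor") that maps a perturbation direction to the estimated smallest
# step size causing a mistranscription, plus a gradient-based search over directions.
# Architecture, names, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class RadiusPredictor(nn.Module):
    """Maps a unit perturbation direction to a positive scalar: the predicted
    minimal scaling at which the black-box ASR model mistranscribes the input."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Softplus(),  # keep the predicted radius positive
        )

    def forward(self, direction: torch.Tensor) -> torch.Tensor:
        return self.net(direction).squeeze(-1)

def fit_predictor(predictor, directions, radii, epochs=200, lr=1e-3):
    """Fit on (direction, measured radius) pairs; in a real attack the radii would
    come from hard-label queries, e.g. a binary search along each direction."""
    opt = torch.optim.Adam(predictor.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(predictor(directions), radii).backward()
        opt.step()
    return predictor

def propose_direction(predictor, init_direction, steps=100, lr=1e-2):
    """Gradient-descend on the direction itself to minimize the predicted radius,
    yielding a promising candidate for the next round of black-box queries."""
    for p in predictor.parameters():        # freeze the surrogate
        p.requires_grad_(False)
    d = init_direction.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([d], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        predictor(d / d.norm()).backward()  # predicted radius for the unit direction
        opt.step()
    return (d / d.norm()).detach()

if __name__ == "__main__":
    # Toy demonstration on random data (dimensions are illustrative, not from the paper).
    dim, n = 512, 64
    directions = torch.nn.functional.normalize(torch.randn(n, dim), dim=1)
    radii = torch.rand(n) * 0.1             # stand-in for radii measured via ASR queries
    predictor = fit_predictor(RadiusPredictor(dim), directions, radii)
    candidate = propose_direction(predictor, torch.randn(dim))
    print("predicted radius of candidate:", predictor(candidate).item())
```

In a full attack loop, the measured radii would be obtained by querying the hard-label ASR model along each direction, the predictor would be refit as new pairs arrive, and the newly proposed directions would drive the next queries, matching the abstract's description of a search that progressively evolves towards a small adversarial perturbation.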

Related research

Targeted Adversarial Examples for Black Box Audio Systems (05/20/2018)
The application of deep recurrent networks to audio transcription has le...

PhantomSound: Black-Box, Query-Efficient Audio Adversarial Attack via Split-Second Phoneme Injection (09/13/2023)
In this paper, we propose PhantomSound, a query-efficient black-box atta...

Black-box Adversarial Attacks on Video Recognition Models (04/10/2019)
Deep neural networks (DNNs) are known for their vulnerability to adversa...

Towards the Transferable Audio Adversarial Attack via Ensemble Methods (04/18/2023)
In recent years, deep learning (DL) models have achieved significant pro...

AdvCodeMix: Adversarial Attack on Code-Mixed Data (10/30/2021)
Research on adversarial attacks are becoming widely popular in the recen...

An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack (10/01/2019)
There are two major paradigms of white-box adversarial attacks that atte...

LCANets++: Robust Audio Classification using Multi-layer Neural Networks with Lateral Competition (08/23/2023)
Audio classification aims at recognizing audio signals, including speech...
