Universal Adversarial Audio Perturbations

08/08/2019
by Sajjad Abdoli, et al.

We demonstrate the existence of universal adversarial perturbations that can fool a family of audio processing architectures in both targeted and untargeted attacks. To the best of our knowledge, this is the first study on generating universal adversarial perturbations for audio processing systems. We propose two methods for finding such perturbations. The first is an iterative, greedy approach well known in computer vision: it aggregates small perturbations to the input so as to push it toward the decision boundary. The second, which is the main technical contribution of this work, is a novel penalty formulation that finds targeted and untargeted universal adversarial perturbations. Unlike the greedy approach, the penalty method minimizes an appropriate objective function over a batch of samples, and therefore produces more successful attacks when the number of training samples is limited. Moreover, we prove that the proposed penalty method converges to a solution corresponding to universal adversarial perturbations. We report comprehensive experiments, showing attack success rates higher than 91.1%.
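The penalty formulation described above can be illustrated with a minimal sketch: a single perturbation vector v is optimized over a whole batch by gradient descent on a per-sample attack loss plus a penalty that keeps v small. This is a toy illustration under simplifying assumptions, not the authors' method: the "classifier" here is a hypothetical linear scorer rather than a deep audio model, and the hinge loss, penalty weight, and step size are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an audio classifier: prediction is sign(w . x).
# (The paper attacks deep audio architectures; a linear model is used here
# only to keep the sketch self-contained and runnable.)
dim, n = 16, 32
w = rng.normal(size=dim)
X = rng.normal(size=(n, dim))

TARGET = -1  # targeted attack: push every input toward the negative class

def penalty_objective(v, c=0.1):
    """Penalty formulation (sketch): hinge loss toward the target class,
    summed over the batch, plus an L2 penalty on the universal perturbation."""
    scores = (X + v) @ w
    hinge = np.maximum(0.0, -TARGET * scores)  # positive while not yet fooled
    return hinge.sum() + c * (v @ v)

def penalty_gradient(v, c=0.1):
    """Gradient of the objective above w.r.t. the shared perturbation v."""
    scores = (X + v) @ w
    active = (-TARGET * scores) > 0            # samples not yet at the target
    return -TARGET * active.sum() * w + 2.0 * c * v

def find_universal_perturbation(steps=200, lr=0.05):
    """Plain gradient descent on the penalized batch objective."""
    v = np.zeros(dim)
    for _ in range(steps):
        v -= lr * penalty_gradient(v)
    return v

v = find_universal_perturbation()
success = np.mean(np.sign((X + v) @ w) == TARGET)
```

Because the loss is summed over the batch, the same v must work for every sample simultaneously, which is what distinguishes a universal perturbation from a per-input attack; the penalty term trades attack success against perturbation size.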


Related research

12/31/2021 · On Distinctive Properties of Universal Perturbations
We identify properties of universal adversarial perturbations (UAPs) tha...

12/08/2020 · Locally optimal detection of stochastic targeted universal adversarial perturbations
Deep learning image classifiers are known to be vulnerable to small adve...

04/26/2020 · Enabling Fast and Universal Audio Adversarial Attack Using Generative Model
Recently, the vulnerability of DNN-based audio systems to adversarial at...

12/10/2019 · Appending Adversarial Frames for Universal Video Attack
There have been many efforts in attacking image classification models wi...

10/07/2021 · One Thing to Fool them All: Generating Interpretable, Universal, and Physically-Realizable Adversarial Features
It is well understood that modern deep networks are vulnerable to advers...

11/17/2020 · FoolHD: Fooling speaker identification by Highly imperceptible adversarial Disturbances
Speaker identification models are vulnerable to carefully designed adver...

06/19/2022 · A Universal Adversarial Policy for Text Classifiers
Discovering the existence of universal adversarial perturbations had lar...
