Deep Learning and Music Adversaries

07/16/2015
by Corey Kereliuk, et al.

An adversary is essentially an algorithm intent on making a classification system behave in some particular way given an input, e.g., increasing the probability of a false negative. Recent work has built adversaries for deep learning systems applied to image object recognition; these exploit the parameters of the system to find the minimal perturbation of the input image such that the network misclassifies it with high confidence. We adapt this approach to construct and deploy an adversary of deep learning systems applied to music content analysis. In our case, however, the input to the systems consists of magnitude spectral frames, which requires special care in order to produce valid input audio signals from network-derived perturbations. For two different train-test partitionings of two benchmark datasets, and two different deep architectures, we find that this adversary is very effective in defeating the resulting systems. We find, however, that the convolutional networks are more robust than systems that classify individual audio frames and take a majority vote. Furthermore, we integrate the adversary into the training of new deep systems, but do not find that this improves their resilience against the same adversary.


Related research

- 12/15/2017, "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning": Deep learning models have achieved high performance on many tasks, and t...
- 10/31/2019, "Adversarial Music: Real World Audio Adversary Against Wake-word Detection System": Voice Assistants (VAs) such as Amazon Alexa or Google Assistant rely on ...
- 03/26/2021, "Combating Adversaries with Anti-Adversaries": Deep neural networks are vulnerable to small input perturbations known a...
- 08/02/2019, "AdvGAN++: Harnessing latent layers for adversary generation": Adversarial examples are fabricated examples, indistinguishable from the...
- 03/05/2020, "Search Space of Adversarial Perturbations against Image Filters": The superiority of deep learning performance is threatened by safety iss...
- 03/18/2019, "The epidemiology of lateral movement: exposures and countermeasures with network contagion models": An approach is developed for analyzing computer networks to identify sys...
- 03/29/2018, "Protection against Cloning for Deep Learning": The susceptibility of deep learning to adversarial attack can be underst...
