End-to-End Adversarial White Box Attacks on Music Instrument Classification

07/29/2020
by Katharina Prinz, et al.

Small adversarial perturbations of input data can drastically degrade the performance of machine learning systems, challenging the validity of such systems. We present the first end-to-end adversarial attacks on a music instrument classification system, which add perturbations directly to audio waveforms rather than to spectrograms. Our attacks reduce accuracy to close to a random baseline while keeping the perturbations almost imperceptible, and they can produce targeted misclassifications to any desired instrument.
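The core idea the abstract describes, perturbing the raw waveform so that a classifier is steered toward a chosen target class, can be illustrated with a targeted FGSM-style iteration. The sketch below is a minimal, hypothetical illustration only: it uses a toy linear classifier over waveform samples instead of the deep network from the paper, and all names (`targeted_fgsm`, `W`, `eps`) are assumptions for the example, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an instrument classifier: a linear model over the
# raw waveform (hypothetical; the paper attacks a deep network end-to-end).
n_samples, n_classes = 1024, 4
W = rng.normal(size=(n_classes, n_samples))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    """Class probabilities for a waveform x in [-1, 1]."""
    return softmax(W @ x)

def targeted_fgsm(x, target, eps=0.01):
    """One targeted FGSM step directly on the waveform:
    move a small signed step against the cross-entropy gradient
    toward the target class, then clip to the valid audio range."""
    p = predict(x)
    onehot = np.eye(n_classes)[target]
    grad_x = W.T @ (p - onehot)  # d(cross-entropy)/dx for the linear model
    return np.clip(x - eps * np.sign(grad_x), -1.0, 1.0)

x = np.clip(rng.normal(scale=0.1, size=n_samples), -1.0, 1.0)  # fake waveform
target = 2
x_adv = x
for _ in range(10):
    x_adv = targeted_fgsm(x_adv, target)

print(predict(x).argmax(), predict(x_adv).argmax())
```

Because each step changes every sample by at most `eps`, ten iterations bound the perturbation by 0.1 in the L-infinity sense, which is one common way such attacks keep the added noise close to imperceptible while still flipping the predicted instrument to the target.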


Related research

- 10/22/2019: Cross-Representation Transferability of Adversarial Perturbations: From Spectrograms to Audio Waveforms. "This paper shows the susceptibility of spectrogram-based audio classifie..."
- 05/24/2022: Defending a Music Recommender Against Hubness-Based Adversarial Attacks. "Adversarial attacks can drastically degrade performance of recommenders ..."
- 01/04/2023: Validity in Music Information Research Experiments. "Validity is the truth of an inference made from evidence, such as data c..."
- 12/20/2019: secml: A Python Library for Secure and Explainable Machine Learning. "We present secml, an open-source Python library for secure and explainab..."
- 03/03/2021: A Robust Adversarial Network-Based End-to-End Communications System With Strong Generalization Ability Against Adversarial Attacks. "We propose a novel defensive mechanism based on a generative adversarial..."
- 03/26/2021: MagDR: Mask-guided Detection and Reconstruction for Defending Deepfakes. "Deepfakes raised serious concerns on the authenticity of visual contents..."
