Adversarial Attacks against Neural Networks in Audio Domain: Exploiting Principal Components

07/14/2020
by   Ken Alparslan, et al.

Adversarial attacks are inputs that are similar to the original inputs but deliberately altered to induce misclassification. The speech-to-text neural networks in wide use today are prone to misclassifying such inputs. In this study, we first investigate targeted adversarial attacks crafted by altering waveforms from the Common Voice dataset. We craft adversarial waveforms via the Connectionist Temporal Classification (CTC) loss function and attack DeepSpeech, a speech-to-text neural network implemented by Mozilla. We achieve a 100% attack success rate (zero correct classifications by DeepSpeech) on all 25 adversarial waveforms that we crafted. Second, we investigate PCA as a defense mechanism against adversarial attacks. We reduce dimensionality by applying PCA to these 25 attacks and test them against DeepSpeech again; we still observe zero correct classifications, which suggests PCA is not a good defense mechanism in the audio domain. Finally, instead of using PCA as a defense, we use it to craft adversarial inputs in a black-box setting with minimal adversarial knowledge. With no knowledge of the model, its parameters, or its weights, we craft adversarial attacks by applying PCA to samples from the Common Voice dataset and again achieve a 100% attack success rate against DeepSpeech in this black-box setting. We also experiment with the percentage of principal components retained during the attack-crafting process; in all cases, the adversary is successful.
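The PCA-based black-box perturbation described above can be sketched as follows. This is a minimal illustration using NumPy's SVD: the waveform is framed, projected onto its leading principal components, and reconstructed, so the residual discarded by the projection acts as the perturbation. The frame length and component-retention ratio are hypothetical parameters chosen for illustration, not the paper's exact settings.

```python
import numpy as np

def pca_perturb(waveform, frame_len=256, keep_ratio=0.5):
    """Reconstruct an audio waveform from a subset of its principal
    components, yielding a candidate black-box adversarial input.
    (Illustrative parameters; the paper's exact pipeline may differ.)"""
    # Split the waveform into fixed-length frames (rows of a data matrix).
    n_frames = len(waveform) // frame_len
    X = waveform[: n_frames * frame_len].reshape(n_frames, frame_len)
    # Center the frames and compute principal directions via SVD.
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Keep only the leading components and reconstruct each frame.
    k = max(1, int(keep_ratio * Vt.shape[0]))
    X_rec = Xc @ Vt[:k].T @ Vt[:k] + mean
    return X_rec.reshape(-1)

# Example: perturb a synthetic 1-second, 16 kHz signal.
rng = np.random.default_rng(0)
wave = rng.standard_normal(16000).astype(np.float32)
adv = pca_perturb(wave, keep_ratio=0.25)
```

The reconstructed waveform `adv` would then be fed to the target model (here, DeepSpeech) to check whether the transcription changes; no gradients or model internals are required, which is what makes this a black-box attack.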

