Crafting Adversarial Examples For Speech Paralinguistics Applications
Computational paralinguistic analysis is increasingly being used in a wide range of applications, including security-sensitive ones such as speaker verification, deceptive speech detection, and medical diagnosis. While state-of-the-art machine learning techniques, such as deep neural networks, can provide robust and accurate speech analysis, they are susceptible to adversarial attacks. In this work, we propose a novel end-to-end scheme that generates adversarial examples by directly perturbing the raw waveform of an audio recording rather than specific acoustic features. Our experiments show that the proposed adversarial perturbation can cause a significant performance drop in state-of-the-art deep neural networks, while only minimally impairing the audio quality.
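The general idea of waveform-level adversarial perturbation can be illustrated with a generic gradient-sign (FGSM-style) attack. The sketch below is a hypothetical, minimal NumPy example using a toy differentiable classifier over raw samples; the model, sample rate, and perturbation budget are illustrative assumptions, not the paper's actual end-to-end scheme.

```python
import numpy as np

# Illustrative FGSM-style attack on a raw waveform (hypothetical toy model,
# not the paper's actual end-to-end scheme).

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n = 16000                               # one second at an assumed 16 kHz rate
w = rng.normal(scale=0.01, size=n)      # toy logistic-regression weights
b = 0.0

x = rng.normal(scale=0.1, size=n)       # stand-in for a raw waveform in [-1, 1]
y = 1.0                                 # true label

def input_gradient(x, y):
    # Gradient of binary cross-entropy loss w.r.t. the input waveform.
    p = sigmoid(w @ x + b)
    return (p - y) * w

eps = 0.002                             # small amplitude budget to limit audible distortion
x_adv = np.clip(x + eps * np.sign(input_gradient(x, y)), -1.0, 1.0)
```

Because the perturbation is bounded by `eps` per sample, its audible impact stays small, which mirrors the abstract's claim that audio quality is only minimally impaired.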