Robustifying automatic speech recognition by extracting slowly varying features

12/14/2021
by Matias Pizarro, et al.

In the past few years, it has been shown that deep learning systems are highly vulnerable to attacks with adversarial examples. Neural-network-based automatic speech recognition (ASR) systems are no exception. Targeted and untargeted attacks can modify an audio input signal in such a way that humans still recognise the same words, while ASR systems are steered to predict a different transcription. In this paper, we propose a defense mechanism against targeted adversarial attacks that consists of removing fast-changing features from the audio signals, either by applying slow feature analysis, a low-pass filter, or both, before feeding the input to the ASR system. We perform an empirical analysis of hybrid ASR models trained on data pre-processed in this way. The resulting models perform well on benign data and are significantly more robust against targeted adversarial attacks: our final, proposed model shows performance on clean data similar to the baseline model, while being more than four times more robust.
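The pre-processing idea can be sketched in a few lines. The snippet below is an illustrative sketch, not the authors' implementation: it applies a zero-phase Butterworth low-pass filter to the raw waveform and a minimal linear slow feature analysis (SFA) to a matrix of per-frame features. The cutoff frequency, filter order, and the choice to run SFA on feature frames are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's code): suppress fast-changing content
# in an audio signal before it reaches the ASR front end, either with a
# low-pass filter on the waveform or with a minimal linear SFA on features.
import numpy as np
from scipy.signal import butter, filtfilt

def low_pass(audio, sample_rate, cutoff_hz=7000, order=5):
    """Zero-phase Butterworth low-pass filter on the raw waveform.
    cutoff_hz and order are hypothetical choices, not taken from the paper."""
    nyquist = 0.5 * sample_rate
    b, a = butter(order, cutoff_hz / nyquist, btype="low")
    return filtfilt(b, a, audio)

def linear_sfa(frames, n_slow=20):
    """Minimal linear slow feature analysis.
    frames: (T, D) matrix of per-frame features (e.g. log-mel energies).
    Returns the n_slow projections whose outputs vary most slowly in time."""
    # 1) Center and whiten the data.
    x = frames - frames.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-8                      # drop near-zero directions
    whiten = eigvec[:, keep] / np.sqrt(eigval[keep])
    z = x @ whiten
    # 2) PCA on the temporal derivative: the slowest features are the
    #    directions with the smallest derivative variance (eigh sorts ascending).
    dz = np.diff(z, axis=0)
    dcov = np.cov(dz, rowvar=False)
    _, deigvec = np.linalg.eigh(dcov)
    return z @ deigvec[:, :n_slow]            # (T, n_slow) slow features
```

In the setting described by the abstract, the ASR model would then be trained and evaluated on inputs pre-processed with one or both of these steps, so that high-frequency adversarial perturbations are attenuated before recognition.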
