Watch What You Pretrain For: Targeted, Transferable Adversarial Examples on Self-Supervised Speech Recognition models

09/17/2022
by   Raphael Olivier, et al.

A targeted adversarial attack produces audio samples that can force an Automatic Speech Recognition (ASR) system to output attacker-chosen text. To exploit ASR models in real-world, black-box settings, an adversary can leverage the transferability property: an adversarial sample produced for a proxy ASR can also fool a different remote ASR. However, recent work has shown that transferability against large ASR models is very difficult to achieve. In this work, we show that modern ASR architectures, specifically ones based on Self-Supervised Learning, are in fact vulnerable to transferable attacks. We demonstrate this phenomenon by evaluating state-of-the-art self-supervised ASR models such as Wav2Vec2, HuBERT, Data2Vec and WavLM. We show that with low-level additive noise achieving a 30dB Signal-to-Noise Ratio, we can achieve targeted transferability with up to 80% accuracy. Furthermore, 1) we use an ablation study to show that Self-Supervised Learning is the main cause of this phenomenon, and 2) we provide an explanation for it. Through this we show that modern ASR architectures are uniquely vulnerable to adversarial security threats.
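The 30dB Signal-to-Noise Ratio constraint mentioned above means the adversarial perturbation carries roughly 1000x less power than the speech it is added to (SNR in dB is 10 log10 of the signal-to-noise power ratio). As a minimal sketch of how such a constraint can be enforced, assuming NumPy and using illustrative function names not taken from the paper, a perturbation can be rescaled to hit an exact target SNR:

```python
import numpy as np

def snr_db(signal, noise):
    # Signal-to-Noise Ratio in decibels: 10 * log10(P_signal / P_noise),
    # where power is the sum of squared samples.
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

def scale_noise_to_snr(signal, noise, target_snr_db):
    # Rescale an additive perturbation so that signal + noise meets the
    # target SNR. Amplitude scales by 10^(dB difference / 20) because
    # power is the square of amplitude.
    current = snr_db(signal, noise)
    factor = 10.0 ** ((current - target_snr_db) / 20.0)
    return noise * factor

# Illustrative stand-ins for one second of 16 kHz audio and a raw perturbation.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
perturbation = rng.standard_normal(16000)

scaled = scale_noise_to_snr(speech, perturbation, 30.0)
print(round(snr_db(speech, scaled), 1))  # → 30.0
```

At 30dB the perturbation amplitude is about 3% of the speech amplitude, which is why such attacks can remain hard to notice while still steering the transcription.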
