Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information

10/19/2021
by   Baolin Zheng, et al.

Adversarial attacks against commercial black-box speech platforms, including cloud speech APIs and voice control devices, have received little attention until recent years. Existing black-box attacks all rely heavily on prediction/confidence scores to craft effective adversarial examples (AEs), so service providers can trivially defend against them by withholding these scores. In this paper, we propose two novel adversarial attacks under more practical and rigorous scenarios. For commercial cloud speech APIs, we propose Occam, a decision-only black-box adversarial attack in which only final decisions are available to the adversary. In Occam, we formulate decision-only AE generation as a discontinuous, large-scale global optimization problem and solve it by adaptively decomposing this complicated problem into a set of sub-problems and cooperatively optimizing each one. Occam is a one-size-fits-all approach: it achieves a 100% success rate of attack (SRoA) with an average SNR of 14.23 dB against a wide range of popular speech and speaker recognition APIs, including Google, Alibaba, Microsoft, Tencent, iFlytek, and Jingdong, outperforming state-of-the-art black-box attacks.

For commercial voice control devices, we propose NI-Occam, the first non-interactive physical adversarial attack, in which the adversary neither queries the oracle nor has access to its internal information or training data. By combining adversarial attacks with model inversion attacks, NI-Occam generates physically effective audio AEs with high transferability without any interaction with the target devices. Our experimental results show that NI-Occam can successfully fool Apple Siri, Microsoft Cortana, Google Assistant, iFlytek, and Amazon Echo with an average SRoA of 52%, shedding light on non-interactive physical attacks against voice control devices.
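The decision-only setting above can be illustrated with a toy sketch of cooperative decomposition: a high-dimensional perturbation is split into blocks (sub-problems), and each block is refined in turn using only hard accept/reject decisions from an oracle. Everything below (the synthetic oracle, the adversarial seed, the block sizes, and the shrink-and-perturb update) is an illustrative assumption, not the paper's actual Occam algorithm:

```python
import math
import random

random.seed(0)
DIM = 64

# Synthetic hard-label oracle: returns only a final decision (True = the
# perturbed input is still misrecognized), standing in for a commercial API
# that exposes no scores. SECRET is hidden from the "attacker" logic below.
SECRET = [random.gauss(0.0, 1.0) for _ in range(DIM)]

def oracle_decision(delta):
    return sum(d * s for d, s in zip(delta, SECRET)) > 8.0

def loss(delta):
    # Decision-only objective: perturbation magnitude if the oracle's
    # decision stays flipped, infinity otherwise.
    return math.sqrt(sum(d * d for d in delta)) if oracle_decision(delta) else math.inf

def cooperative_attack(groups=4, iters=400, sigma=0.3):
    # Start from a large perturbation that is already adversarial
    # (decision-based attacks typically begin from such a seed and shrink it).
    delta = list(SECRET)
    best = loss(delta)
    size = DIM // groups
    for t in range(iters):
        lo = (t % groups) * size          # round-robin over sub-problems
        cand = list(delta)
        for i in range(lo, lo + size):    # shrink this block, add noise
            cand[i] = 0.9 * cand[i] + random.gauss(0.0, sigma)
        c = loss(cand)
        if c < best:                      # accept only improving candidates
            delta, best = cand, c
    return delta, best
```

The key design point mirrored here is that each query costs one oracle decision and no gradients or scores are ever used; the real attack replaces this naive per-block search with adaptive decomposition and a far stronger per-group optimizer.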

