Pipe Overflow: Smashing Voice Authentication for Fun and Profit

02/06/2022
by Shimaa Ahmed, et al.

Recent years have seen a surge in the popularity of acoustics-enabled personal devices powered by machine learning. Yet machine learning has proven vulnerable to adversarial examples. Many modern systems protect themselves against such attacks by targeting their artificiality, i.e., they deploy mechanisms to detect the lack of human involvement in generating adversarial examples. However, these defenses implicitly assume that humans are incapable of producing meaningful and targeted adversarial examples. In this paper, we show that this underlying assumption is wrong. In particular, we demonstrate that for tasks like speaker identification, a human can produce analog adversarial examples directly, at little cost and with little supervision: simply by speaking through a tube, an adversary reliably impersonates other speakers in the eyes of ML speaker-identification models. Our findings extend to a range of other acoustic-biometric tasks, such as liveness detection, bringing into question their use in real-life security-critical settings such as phone banking.
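The physical intuition behind the attack is that a rigid tube acts as an acoustic resonator whose modes (for a tube open at both ends, roughly f_n = n·c/(2L)) recolor the talker's spectrum. The sketch below is an illustrative digital approximation of that effect, not the paper's method: it applies a parallel bank of peaking filters at the tube's resonance frequencies to a waveform. The helper names, tube length, Q factor, and mix level are all assumptions made for illustration.

```python
# Hypothetical sketch: approximate the spectral coloring of speaking through a
# rigid tube. Tube length, Q, and mix level are illustrative assumptions, not
# parameters from the paper.
import numpy as np
from scipy.signal import iirpeak, lfilter

def tube_resonances(length_m=0.30, speed_of_sound=343.0, fs=16000, n_modes=5):
    # Resonances of a tube open at both ends: f_n = n * c / (2 * L).
    f0 = speed_of_sound / (2.0 * length_m)
    return [n * f0 for n in range(1, n_modes + 1) if n * f0 < fs / 2.0]

def apply_tube(audio, fs=16000, length_m=0.30, q=8.0, dry_mix=0.3):
    # Parallel bank of second-order peaking (resonator) filters, one per mode,
    # mixed with some of the dry signal -- a crude modal model of the tube.
    audio = np.asarray(audio, dtype=np.float64)
    wet = np.zeros_like(audio)
    for f in tube_resonances(length_m=length_m, fs=fs):
        b, a = iirpeak(f, q, fs=fs)    # narrow boost centered on resonance f
        wet += lfilter(b, a, audio)
    out = dry_mix * audio + wet
    return out / (np.max(np.abs(out)) + 1e-9)   # normalize to avoid clipping

if __name__ == "__main__":
    fs = 16000
    clip = np.random.randn(fs)         # stand-in for a 1-second speech clip
    colored = apply_tube(clip, fs=fs, length_m=0.30)
    # A speaker-identification model would now be queried with `colored` to
    # check whether its predicted identity drifts toward another speaker.
    print([round(f, 1) for f in tube_resonances(fs=fs)])
```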

