Generative Extraction of Audio Classifiers for Speaker Identification

07/26/2022
by Tejumade Afonja, et al.

It is perhaps no longer surprising that machine learning models, particularly deep neural networks, are vulnerable to attacks. One such well-studied vulnerability is model extraction: an attack in which the adversary attempts to steal a victim's model by training a surrogate model to mimic the victim model's decision boundaries. Previous works have demonstrated the effectiveness of such attacks and their devastating consequences, but largely for image and text processing tasks. Our work is the first attempt to perform model extraction on audio classification models. We are motivated by an attacker whose goal is to mimic the behavior of a victim model trained to identify a speaker, a threat that is particularly problematic in security-sensitive domains such as biometric authentication. We find that prior model extraction techniques, in which the attacker naively uses a proxy dataset to query the victim's model, fail. We therefore propose using a generative model to create a sufficiently large and diverse pool of synthetic attack queries. We find that our approach is able to extract a victim model trained on one dataset using queries synthesized with a proxy dataset based on another; we achieve a test accuracy of 84.41% with a budget of 3 million queries.
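The attack the abstract outlines follows the standard extraction loop, with the attacker's queries drawn from a generative model rather than a proxy dataset. Below is a minimal PyTorch sketch of that loop under stated assumptions: the victim, generator, and surrogate callables, soft-label (probability) access to the victim, and all hyperparameters are hypothetical illustrations, not the authors' implementation.

# Hypothetical sketch of generative model extraction (not the paper's code).
# Assumes: a black-box `victim` returning class probabilities, a pretrained
# `generator` mapping latent noise to audio-like inputs (e.g. spectrograms),
# and a trainable `surrogate` classifier with the same output dimension.
import torch
import torch.nn.functional as F

def extract(victim, generator, surrogate,
            query_budget=3_000_000, batch_size=256, latent_dim=128):
    """Train `surrogate` to mimic `victim` using synthetic queries."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-4)
    queries_used = 0
    while queries_used < query_budget:
        # Synthesize a batch of attack queries with the generative model.
        z = torch.randn(batch_size, latent_dim)
        with torch.no_grad():
            x = generator(z)        # audio-like inputs
            y_victim = victim(x)    # soft labels from the black-box victim
        queries_used += batch_size
        # Match the surrogate's output distribution to the victim's.
        loss = F.kl_div(F.log_softmax(surrogate(x), dim=-1),
                        y_victim, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return surrogate

This sketch assumes the victim exposes class probabilities; with label-only access, the KL term would typically be replaced by cross-entropy against the victim's argmax labels.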

Related research

MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI (07/19/2021)
The advance of explainable artificial intelligence, which provides reaso...

Adversarial Attacks on Remote User Authentication Using Behavioural Mouse Dynamics (05/28/2019)
Mouse dynamics is a potential means of authenticating users. Typically, ...

Stateful Detection of Adversarial Reprogramming (11/05/2022)
Adversarial reprogramming allows stealing computational resources by rep...

DeltaBound Attack: Efficient decision-based attack in low queries regime (10/01/2022)
Deep neural networks and other machine learning systems, despite being e...

Adversarial Vulnerability of Active Transfer Learning (01/26/2021)
Two widely used techniques for training supervised machine learning mode...

Dataset correlation inference attacks against machine learning models (12/16/2021)
Machine learning models are increasingly used by businesses and organiza...

Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models (11/24/2022)
In recent years, various watermarking methods were suggested to detect c...
