Efficient Defense Against Model Stealing Attacks on Convolutional Neural Networks

09/04/2023
by Kacem Khaled, et al.

Model stealing attacks have become a serious concern for deep learning models: an attacker can steal a trained model by querying its black-box API, leading to intellectual property theft and other security and privacy risks. Current state-of-the-art defenses against model stealing attacks add perturbations to the prediction probabilities, but they suffer from heavy computation and make impractical assumptions about the adversary. In particular, they often require training auxiliary models, which is time-consuming and resource-intensive and hinders deployment in real-world applications. In this paper, we propose a simple yet effective and efficient alternative: a heuristic approach that perturbs the output probabilities and can be integrated into models without any additional training. We show that our defense is effective against three state-of-the-art stealing attacks. We evaluate our approach on large and quantized (i.e., compressed) Convolutional Neural Networks (CNNs) trained on several vision datasets. Our technique outperforms state-of-the-art defenses with 37× faster inference, without requiring any additional model, and with low impact on the model's performance. We also validate that our defense remains effective for quantized CNNs targeting edge devices.
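
The abstract only states that the defense perturbs output probabilities without extra training; the exact heuristic is not given here. Below is a minimal, hypothetical NumPy sketch of that general idea, adding noise to a softmax output while keeping the top-1 label unchanged so clean accuracy is preserved. The function name perturb_probs and the uniform-noise choice are illustrative assumptions, not the authors' method:

import numpy as np

def perturb_probs(probs, noise_scale=0.1, rng=None):
    """Add bounded noise to a probability vector, renormalize, and keep the
    original top-1 class on top so the benign prediction is unchanged.
    Illustrative only; not the perturbation heuristic from the paper."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = probs + rng.uniform(0.0, noise_scale, size=probs.shape)
    noisy = np.clip(noisy, 1e-12, None)
    noisy = noisy / noisy.sum()
    top1 = int(probs.argmax())
    new_top = int(noisy.argmax())
    if new_top != top1:
        # Swap the two entries so the defended output still predicts the same class.
        noisy[top1], noisy[new_top] = noisy[new_top], noisy[top1]
    return noisy

# Example: defend a single softmax output before returning it from a black-box API.
clean = np.array([0.70, 0.20, 0.10])
print(perturb_probs(clean, noise_scale=0.2))

Because the perturbation is applied per query at inference time, such a defense needs no auxiliary model and adds only a small constant overhead, which matches the efficiency argument made in the abstract.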

Related research

08/02/2023 · Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks
Despite the broad application of Machine Learning models as a Service (M...

04/19/2021 · LAFEAT: Piercing Through Adversarial Defenses with Latent Features
Deep convolutional neural networks are susceptible to adversarial attack...

06/11/2022 · NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks
Membership inference attacks (MIAs) against machine learning models can ...

06/26/2019 · Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks
With the advances of ML models in recent years, we are seeing an increas...

09/12/2023 · Exploring Non-additive Randomness on ViT against Query-Based Black-Box Attacks
Deep Neural Networks can be easily fooled by small and imperceptible per...

02/24/2022 · Towards Effective and Robust Neural Trojan Defenses via Input Filtering
Trojan attacks on deep neural networks are both dangerous and surreptiti...

09/05/2022 · An Adaptive Black-box Defense against Trojan Attacks (TrojDef)
Trojan backdoor is a poisoning attack against Neural Network (NN) classi...
