VerIDeep: Verifying Integrity of Deep Neural Networks through Sensitive-Sample Fingerprinting

08/09/2018
by Zecheng He, et al.

Deep learning has become popular, and numerous cloud-based services now help customers develop and deploy deep learning applications. Meanwhile, various attack techniques have been discovered that stealthily compromise a model's integrity. When a cloud customer deploys a deep learning model in the cloud and serves it to end-users, the customer must be able to verify that the deployed model has not been tampered with and that its integrity is protected. We propose a new low-cost, self-served methodology for customers to verify that the model deployed in the cloud is intact, while having only black-box access (e.g., via APIs) to the deployed model. Customers can detect arbitrary changes to their deep learning models. Specifically, we define Sensitive-Sample fingerprints: a small set of transformed inputs that make the model's outputs sensitive to its parameters. Even small weight changes are clearly reflected in the model outputs and can be observed by the customer. Our experiments on different types of model integrity attacks show that we can detect model integrity breaches with high accuracy (>99%) and low overhead (<10 black-box model accesses).
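The core idea — crafting inputs whose outputs are maximally sensitive to the model's weights, then comparing the remote model's outputs on those inputs against recorded values — can be sketched on a toy one-neuron logistic model. This is an illustrative sketch, not the paper's implementation: the model, weight values, finite-difference gradient ascent, and the [0, 1] feature constraint are all assumptions made for the example.

```python
import math

# Toy stand-in for a DNN: a single logistic neuron with known weights W.
W = [0.5, -1.2, 0.8]

def predict(w, x):
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def sensitivity(x, w=W):
    # For a logistic neuron, ||dy/dw||^2 = (y * (1 - y))^2 * ||x||^2.
    # Larger values mean the output reacts more strongly to weight changes.
    y = predict(w, x)
    return (y * (1.0 - y)) ** 2 * sum(xi * xi for xi in x)

def make_sensitive_sample(x, steps=200, lr=0.2, h=1e-5):
    # Projected gradient ascent on the sensitivity objective, keeping
    # features in [0, 1] (a pixel-like validity constraint). The gradient
    # is estimated by central finite differences for simplicity.
    x = list(x)
    for _ in range(steps):
        grad = []
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            grad.append((sensitivity(xp) - sensitivity(xm)) / (2.0 * h))
        x = [min(1.0, max(0.0, xi + lr * g)) for xi, g in zip(x, grad)]
    return x

x0 = [0.2, 0.9, 0.1]              # an ordinary input
xs = make_sensitive_sample(x0)    # a sensitive-sample fingerprint input
fingerprint = predict(W, xs)      # output the customer records beforehand

# Simulate tampering: a small uniform shift of every weight. The output
# shift on the sensitive sample exceeds the shift on the ordinary input,
# which is what makes black-box detection feasible.
W_tampered = [wi + 0.05 for wi in W]
shift_sensitive = abs(predict(W_tampered, xs) - fingerprint)
shift_plain = abs(predict(W_tampered, x0) - predict(W, x0))
```

In this sketch, verification amounts to querying the deployed model on `xs` and checking the returned value against the recorded `fingerprint`; a mismatch beyond numerical tolerance indicates the weights changed.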


