Integrity Fingerprinting of DNN with Double Black-box Design and Verification

03/21/2022
by Shuo Wang, et al.

Cloud-enabled Machine Learning as a Service (MLaaS) has shown enormous promise to transform how deep learning models are developed and deployed. Nonetheless, such services carry a risk: a malicious party can modify a deployed model to produce adverse results. It is therefore imperative for model owners, service providers, and end-users to verify whether a deployed model has been tampered with. Such verification requires public verifiability (i.e., fingerprinting patterns are available to all parties, including adversaries) and black-box access to the deployed model via APIs. Existing watermarking and fingerprinting approaches, however, require white-box knowledge (such as gradients) to design the fingerprints and support only private verifiability, i.e., verification by an honest party. In this paper, we describe a practical watermarking technique that requires only black-box knowledge to design the fingerprints and only black-box queries during verification, and thus ensures the integrity of cloud-based services under public verification. If an adversary manipulates a model, the manipulation shifts its decision boundary; the underlying principle of double-black watermarking is therefore that a model's decision boundary can serve as its inherent fingerprint. Our approach captures the decision boundary by generating a limited number of encysted sample fingerprints: naturally transformed and augmented inputs enclosed around the model's decision boundary. We evaluate our approach against a variety of model integrity and model compression attacks.
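To make the idea concrete, the sketch below illustrates how boundary-enclosing fingerprints might be generated and later replayed using nothing but black-box queries. It is a hypothetical illustration of the principle described in the abstract, not the paper's exact algorithm: the `query` function, the particular natural transformations, and the `margin`/`tolerance` parameters are all assumptions introduced for this example.

```python
import numpy as np

def natural_transforms(x, rng):
    """Yield lightly perturbed, natural-looking variants of an input."""
    yield np.clip(x * rng.uniform(0.8, 1.2), 0.0, 1.0)                # brightness scaling
    yield np.clip(x + rng.normal(0.0, 0.02, size=x.shape), 0.0, 1.0)  # mild additive noise
    yield np.roll(x, shift=int(rng.integers(-2, 3)), axis=-1)         # small spatial shift

def generate_fingerprints(query, seeds, margin=0.1, budget=64, seed=0):
    """Build an 'encysted' fingerprint set via black-box queries only.

    `query(x)` is assumed to return the deployed model's class-probability
    vector for input `x`. A transformed sample is kept when its top-2
    confidence gap is below `margin`, i.e. it sits close to the decision
    boundary and is therefore sensitive to boundary shifts.
    """
    rng = np.random.default_rng(seed)
    fingerprints = []
    for x in seeds:
        for x_t in natural_transforms(x, rng):
            probs = np.asarray(query(x_t))        # black-box API call
            top2 = np.sort(probs)[-2:]
            if top2[1] - top2[0] < margin:        # near-boundary sample
                fingerprints.append((x_t, int(np.argmax(probs))))
            if len(fingerprints) >= budget:
                return fingerprints
    return fingerprints

def verify(query, fingerprints, tolerance=0.0):
    """Replay the fingerprints against a deployed API and compare labels.

    Returns True if at most a `tolerance` fraction of fingerprint samples
    receive a label different from the reference label recorded at
    fingerprinting time.
    """
    mismatches = sum(int(np.argmax(query(x)) != y) for x, y in fingerprints)
    return mismatches <= tolerance * len(fingerprints)
```

Because the retained samples sit close to the decision boundary, even small parameter changes caused by tampering, fine-tuning, or compression tend to flip their predicted labels, which is what the replayed verification queries detect; and since the check uses only ordinary prediction APIs, the fingerprint set can be published and verified by any party.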

Related research

08/09/2018 - VerIDeep: Verifying Integrity of Deep Neural Networks through Sensitive-Sample Fingerprinting
07/13/2023 - Towards Traitor Tracing in Black-and-White-Box DNN Watermarking with Tardos-based Codes
05/22/2019 - A framework for the extraction of Deep Neural Networks by leveraging public data
05/30/2022 - Integrity Authentication in Tree Models
10/20/2021 - QoS-based Trust Evaluation for Data Services as a Black Box
12/03/2019 - A Study of Black Box Adversarial Attacks in Computer Vision
10/01/2018 - Privado: Practical and Secure DNN Inference
