FBI: Fingerprinting models with Benign Inputs

08/05/2022
by Thibault Maho, et al.

Recent advances in the fingerprinting of deep neural networks make it possible to detect instances of models placed behind a black-box interaction scheme. The inputs used by these fingerprinting protocols are specifically crafted for each precise model to be checked for. While efficient in that scenario, this offers no guarantee once the model undergoes even a mild modification (such as retraining or quantization). This paper tackles these challenges by proposing i) fingerprinting schemes that are resilient to significant modifications of the models, obtained by generalizing to the notion of model families and their variants, and ii) an extension of the fingerprinting task to scenarios where one wants not only to check for a precise model (previously referred to as a detection task) but also to identify which model family is in the black box (identification task). We achieve both goals by demonstrating that benign inputs, i.e., unmodified images, are sufficient material for both tasks. We leverage an information-theoretic scheme for the identification task and devise a greedy discrimination algorithm for the detection task. Both approaches are experimentally validated over an unprecedented set of more than 1,000 networks.
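The core idea of discriminating models with benign inputs can be illustrated with a minimal sketch. This is not the paper's algorithm: it is a hypothetical toy in which candidate models are simulated as fixed top-1 label vectors over a pool of benign inputs, and a greedy loop queries the input that best splits the set of remaining candidates, eliminating those inconsistent with the observed label.

```python
import random

# Toy setup (all names and sizes are assumptions for illustration):
# labels[m][i] = top-1 class that candidate model m outputs on benign input i.
random.seed(0)
N_INPUTS, N_MODELS, N_CLASSES = 50, 8, 10
labels = [[random.randrange(N_CLASSES) for _ in range(N_INPUTS)]
          for _ in range(N_MODELS)]

def identify(black_box, candidates, max_queries=10):
    """Narrow down which candidate sits in the black box using benign inputs.

    Greedy heuristic: at each step, query the input whose worst-case
    bucket of agreeing candidates is smallest, i.e. the input that
    splits the remaining candidate set most evenly.
    """
    remaining = list(candidates)
    for _ in range(max_queries):
        if len(remaining) <= 1:
            break
        def worst_bucket(i):
            counts = {}
            for m in remaining:
                counts[labels[m][i]] = counts.get(labels[m][i], 0) + 1
            return max(counts.values())
        i = min(range(N_INPUTS), key=worst_bucket)
        observed = black_box(i)  # one benign query to the black box
        # keep only candidates consistent with the observed label
        remaining = [m for m in remaining if labels[m][i] == observed]
    return remaining

# Simulate a black box hiding candidate model 3.
secret = 3
result = identify(lambda i: labels[secret][i], range(N_MODELS))
print(result)
```

Because randomly drawn candidates almost surely disagree on some benign input, a handful of well-chosen queries suffices to isolate the hidden model; the paper's actual schemes select inputs by information-theoretic criteria over real network outputs.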


Related research

- 07/13/2023 | Towards Traitor Tracing in Black-and-White-Box DNN Watermarking with Tardos-based Codes
  The growing popularity of Deep Neural Networks, which often require comp...
- 06/24/2020 | Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks
  The vulnerability of deep neural networks (DNNs) to adversarial examples...
- 03/05/2019 | DeepStego: Protecting Intellectual Property of Deep Neural Networks by Steganography
  Deep Neural Networks (DNNs) have shown great success in various challengi...
- 10/09/2017 | Enhancing Transparency of Black-box Soft-margin SVM by Integrating Data-based Prior Information
  The lack of transparency often makes the black-box models difficult to b...
- 02/13/2018 | Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring
  Deep Neural Networks have recently gained lots of success after enabling...
- 11/20/2019 | Outside the Box: Abstraction-Based Monitoring of Neural Networks
  Neural networks have demonstrated unmatched performance in a range of cl...
- 04/22/2021 | Closing Bell: Boxing black box simulations in the resource theory of contextuality
  This chapter contains an exposition of the sheaf-theoretic framework for...
